Do we have to fear machine learning or AI?

Date: 23-08-2023

Updated: 12-02-2026


categories:

  • "blockchain"
  • "machine-learning"

tags:

  • "artificial-intelligence"

Numerous individuals have predicted that machine learning or AI could lead to an apocalyptic scenario and the eventual demise of the world. This prediction rests on the premise that AI will become superintelligent and take control of humans.

But can we define superintelligence? Does any such thing exist?

We attain intelligence through experimentation and data. To predict something accurately, we need many variables, and therefore more computation. There is no evidence that the rules of physics or the rules of the universe can be broken, so AI running on the hardware of this universe can't break the laws of physics either. For example, even an AI would take thousands of years to crack secure cryptography with current computing power. Perhaps future quantum computers, if they become practical at all, will make some of these tasks easy. However, quantum-safe cryptography already exists.
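
As a rough illustration of the scale involved, here is a back-of-the-envelope sketch of a brute-force key search; the guess rate is an assumed figure, not a measured one.

```python
# Back-of-the-envelope sketch (assumed numbers): how long a brute-force
# search of a 128-bit key space would take at a given guess rate.
SECONDS_PER_YEAR = 60 * 60 * 24 * 365

def years_to_exhaust(key_bits: int, guesses_per_second: float) -> float:
    """Worst-case time to try every key, in years."""
    return (2 ** key_bits) / guesses_per_second / SECONDS_PER_YEAR

# Assume a (very generous) rate of one trillion guesses per second.
print(f"{years_to_exhaust(128, 1e12):.2e} years")  # ~1.08e+19 years
```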

Some things are clear, though. To carry out a quantum computation, you need to keep all your qubits coherent. And this is very hard. Interactions of a system of quantum-coherent entities with their surrounding environment create channels through which the coherence rapidly “leaks out” in a process called decoherence. Researchers seeking to build quantum computers must stave off decoherence, which they can currently do only for a fraction of a second. That challenge gets ever greater as the number of qubits — and hence the potential to interact with the environment — increases.

[The Era of Quantum Computing Is Here. Outlook: Cloudy]

kyberlib: A Robust Rust Library for CRYSTALS-Kyber Post-Quantum Cryptography.

Weather forecasting still requires huge amounts of computation and data; AI can't predict the weather from scratch.

On the internet, with the emergence of deepfakes, we can't easily tell what is real and what is not.

In this case, I don't think it's the AI that is creating the problem. It's the big tech social media platforms that maintain control of the algorithms and amplify propaganda, junk information, and viral content for profit.

With better moderation tools and a governance system for apps, it's possible to tackle disinformation. For example, it's hard to fill Wikipedia with disinformation generated by AI.

Generating sophisticated deepfakes requires significant computation, and many detection algorithms remain one step ahead, though detection may become more and more difficult over time.

You can look at a discussion of deepfakes on Crypto Stack Exchange:

Cryptography to tackle deepfake, proving the photo is original

crypto.stackexchange.com

Deepfake technology has become very difficult to tackle due to sophisticated machine learning algorithms. Now, even when a journalist or bystander provides photo or video evidence, the culprit denies it, claiming that it is the result of deepfake manipulation. Can TEE (Trusted Execution Environment) cryptography technology, like SGX, be used to validate whether a photo is original, taken directly from a camera, and free from any manipulation? This would ensure that the culprit cannot deny the authenticity of the photo. Does it require separate camera hardware, or can the right piece of software alone accomplish this? We can provide these special tools for journalists, etc., to decrease the harm caused by deepfake.
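
As a rough sketch of the signing idea behind such a scheme (not an actual TEE/SGX implementation; the key handling, the stand-in image bytes, and the use of Python's `cryptography` package are illustrative assumptions):

```python
# Minimal sketch (not a real TEE/SGX implementation): a camera whose secure
# element holds a private key could sign each photo's hash at capture time,
# letting anyone later verify the file was not altered after signing.
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey
import hashlib

# In a real device this key would never leave the secure hardware.
camera_key = Ed25519PrivateKey.generate()
public_key = camera_key.public_key()  # published by the manufacturer

photo_bytes = b"raw image bytes from the sensor"  # stand-in for the actual file
digest = hashlib.sha256(photo_bytes).digest()
signature = camera_key.sign(digest)               # done inside the TEE

# Verification by a third party: raises InvalidSignature if the photo changed.
public_key.verify(signature, digest)
print("Photo matches the signature produced at capture time.")
```

Note that a signature like this only proves the file was not modified after signing; it cannot prove that the scene in front of the camera wasn't staged.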

Further, producing accurate and reliable inference requires high-quality data and substantial computational resources, whereas generating false information barely depends on data or computation. A well-trained AI, however, can help detect false inferences.

AI models may not reliably detect content written by AI, but a well-trained AI, relying on accurate data, can predict whether AI-generated content is disinformation. Obviously, AI can't tell what you ate at your last dinner if you lie about it, because it doesn't have that information; nor can it predict what you will eat for dinner tomorrow in a probabilistic universe.

AI for political control

Depending on closed-source AI systems for decision-making can result in biased and exploitative decisions by companies and governments: for example, using them for surveillance to serve personalized ads, or attempts by big tech companies and governments to take control of the political system. It's better to run open-source AI models locally to make predictions from your data.

AI for warfare

There are also dangers associated with governments using AI to automate their military capabilities for mass killing, genocide and warfare. Implementing better democratic structures, designs, and international laws can help address such issues.

Some of the dangers associated with AI include the creation of atom bombs, bioweapons, and the escalation of cyber-attacks. Although there are obstacles in obtaining the necessary knowledge, raw materials, and equipment for such attacks, these barriers are diminishing, potentially accelerated by advancements in AI.

It is essential to note that the decrease in these barriers is not solely due to AI but is rather a result of advancements in other technologies. For example, a graduate biology student can build a virus with access to technologies such as DNA printers, chemical reagents for DNA mutation, NGS, etc.

AI is not a perpetual motion machine

AI can't create perpetual motion machines through its intelligence; it consumes energy, electricity, and natural resources to function. Therefore, it needs to be used efficiently, and only when necessary. Additionally, it cannot fully replace human labor.

AI code generators are writing vulnerable software nearly half the time

2025 GenAI Code Security Report:

The company’s research team used a series of code-completion tasks tied to known vulnerabilities, based on the MITRE CWE list. Then they ran the AI-generated code through Veracode Static Analysis. The results speak for themselves. Java was the riskiest language, with a failure rate of over 70 percent. Python, JavaScript, and C# weren’t much better, each failing between 38 and 45 percent of the time. When it came to specific weaknesses, like cross-site scripting and log injection, the failure rates shot up to 86 and 88 percent.

It’s not just that vulnerabilities are increasing. The report also points out that AI is making it easier for attackers to find and exploit them. Now, even low-skilled hackers can use AI tools to scan systems, identify flaws, and whip up exploit code. That shifts the entire security landscape, putting defenders on their back foot.

One surprising note in the research is that bigger AI models didn’t necessarily perform better than smaller ones. That suggests this is not a problem of scale, but rather something built into how these models are trained and how they handle security-related logic.

AI coding tools make developers slower but they think they're faster, study finds

AI coding tools make developers slower:

Artificial intelligence coding tools are supposed to make software development faster, but researchers who tested these tools in a randomized, controlled trial found the opposite.

Not only did the use of AI tools hinder developers, but it led them to hallucinate, much like the AIs have a tendency to do themselves. The developers predicted a 24 percent speedup, but even after the study concluded, they believed AI had helped them complete tasks 20 percent faster when it had actually delayed their work by about that percentage.

"After completing the study, developers estimate that allowing AI reduced completion time by 20 percent," the study says. "Surprisingly, we find that allowing AI actually increases completion time by 19 percent — AI tooling slowed developers down."

What Everyone Gets Wrong About AI and Learning – Derek Muller

Transcript of Veritasium: What Everyone Gets Wrong About AI and Learning – Derek Muller

End of Moore’s law

The end of Moore's Law is an inevitable reality that the semiconductor industry will eventually face. Moore's Law, which states that the number of transistors on a chip doubles every two years, has been a driving force in the rapid advancement of technology. However, as we approach the physical limits of miniaturization, it becomes clear that this trend cannot continue indefinitely. The fundamental obstacles identified by Moore himself, the speed of light and the finite size of atoms, will inevitably create a bottleneck for further progress.

This will, in turn, create a bottleneck for the amount of computation available to AI, which is so resource- and data-hungry.

AI Is a Statistical Model

Large Language Models (LLMs) work by learning statistical patterns in language and using those patterns to predict what comes next. At their core, they don’t “understand” text the way humans do; instead, they estimate the probability of the next word (or more precisely, the next token) based on all the words that came before it. Given a sequence of tokens, the model outputs a probability distribution over all possible next tokens and then samples one. Repeating this process—predict, sample, append—creates longer passages of text. Surprisingly, this simple loop is enough to generate fluent, coherent language when the model is large and well-trained.
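
To make the predict-sample-append loop concrete, here is a toy sketch with a hand-written bigram table standing in for a real model; all tokens and probabilities are made up.

```python
# Toy sketch of the predict-sample-append loop described above, with a
# hand-made bigram "model" instead of a real neural network.
import random

# Hypothetical next-token probabilities "learned" from data.
next_token_probs = {
    "the": {"cat": 0.5, "dog": 0.3, "end": 0.2},
    "cat": {"sat": 0.6, "ran": 0.2, "end": 0.2},
    "dog": {"ran": 0.7, "sat": 0.1, "end": 0.2},
    "sat": {"end": 1.0},
    "ran": {"end": 1.0},
}

tokens = ["the"]
while tokens[-1] != "end":
    probs = next_token_probs[tokens[-1]]                 # predict a distribution
    choices, weights = zip(*probs.items())
    tokens.append(random.choices(choices, weights)[0])   # sample one token
print(" ".join(tokens[:-1]))  # e.g. "the cat sat"
```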

To make this work, words are first converted into vectors—lists of numbers—called embeddings. These embeddings live in a high-dimensional space where distance and direction carry meaning. Words used in similar contexts end up close together, and certain directions in this space often correspond to abstract relationships. A classic example is that the vector difference between woman and man is similar to the difference between queen and king. In principle, you could take the vector for king, add the “woman minus man” direction, and land near queen. In practice, this works only approximately, because words like queen have richer, more context-dependent meanings than just “female king.” Still, these geometric relationships show how the model encodes patterns from data rather than explicit rules.
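
A toy illustration of that geometry, with made-up three-dimensional vectors; real embeddings have hundreds or thousands of dimensions, and the analogy only holds approximately.

```python
# Toy illustration of the "king - man + woman ≈ queen" geometry with made-up
# 3-dimensional embeddings; not taken from any trained model.
import numpy as np

emb = {
    "man":   np.array([0.9, 0.1, 0.2]),
    "woman": np.array([0.9, 0.9, 0.2]),
    "king":  np.array([0.2, 0.1, 0.9]),
    "queen": np.array([0.2, 0.9, 0.9]),
}

def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

target = emb["king"] - emb["man"] + emb["woman"]
best = max((w for w in emb if w != "king"), key=lambda w: cosine(emb[w], target))
print(best)  # "queen" with these toy vectors; only approximately true in practice
```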

AI Thinking Is Boxed Within Matrix Data

Large language models are fundamentally limited by the size of their vectors and matrices, the quality and diversity of the data they are trained on, and how that data is parsed into tokens and embeddings. Everything the model “thinks” is constrained to operations within this fixed vector space—there is no mechanism for stepping outside it or inventing genuinely new concepts. What appears novel is really recombination: reshuffling, interpolating, and extrapolating patterns that already exist in the training data. If something has no representation, or only a weak one, in that space, the model simply cannot reason about it well. For this reason, LLMs are powerful tools for synthesis and prediction, but they are not replacements for the human brain, which can form new abstractions, question its own assumptions, and generate ideas that are not bounded by a predefined mathematical embedding.

Can LLMs truly be creative and generate original output, or is their behavior closer to plagiarism?

Can LLMs generate effectively infinite outputs from finite training data, similar to humans? Are human experiences themselves finite?

1. Can an LLM produce “infinite output”?

In principle: yes. In substance: no.

Why?

An LLM defines a probability distribution over sequences of tokens. Because:

  • The vocabulary is finite
  • The model can repeatedly sample the “next token”
  • There is no hard theoretical stop condition

…it can generate arbitrarily long sequences. In that very narrow, formal sense, the output space is unbounded.

But this is a trivial infinity, like saying:

“A random number generator can produce infinitely many numbers.”

True, but misleading.

2. Why “infinite output” is the wrong intuition

What the model actually is:

A finite parameterized function mapping a finite context window to a finite probability distribution learned from finite data

Even though:

  • Sampling can continue forever
  • Outputs can be novel combinations

The structure of what can be produced is strictly bounded by:

  • The learned embedding space
  • The learned transformations
  • The finite dimensionality of internal representations
  • The context window limit

So while length is unbounded, expressive capacity is not.

That’s the key distinction.

3. “Same as humans”

Humans do not work like transformers in a crucial way.

LLMs:

  • Fixed weights after training
  • No endogenous goal formation
  • No grounding in the physical world
  • No ability to create new representational primitives

They can only recombine representations already encoded in parameter space.

Humans:

  • Can invent new abstractions
  • Can form concepts without prior examples
  • Can change their own internal “model” structure
  • Are grounded in sensory-motor reality

So while humans also recombine ideas, they are not confined to a fixed latent space.

“AI thinking is boxed within matrix data.”

LLMs can generate arbitrarily long sequences, but not unbounded meaning. Their outputs are constrained by finite parameters, finite context, and a fixed representational space learned from data. Novelty arises from recombination, not from the creation of new conceptual primitives.

Are human experiences themselves finite?

Biologically speaking, human experience is not literally infinite — but it is effectively unbounded in practice because:

  • Sensory input is continuous, analog, and high-dimensional
  • Internal state (memory, emotion, hormones, prediction) constantly modulates perception
  • The system is non-repeating and context-sensitive

So even with finite neurons, the state space explodes.

Now let’s ground this with biology.

How many sensations does the body receive per minute?

There is no single number — and that’s the point — but we can estimate orders of magnitude.

1. Sensory receptors in the human body

Rough counts:

  • Photoreceptors (eyes): ~120 million rods + ~6 million cones → constantly active (even in darkness)

  • Mechanoreceptors (touch, pressure, vibration): ~5 million+

  • Thermoreceptors (heat/cold): ~1 million+

  • Nociceptors (pain): ~3–4 million

  • Proprioceptors (body position): Hundreds of thousands

  • Auditory hair cells: ~15,000 per ear (but firing continuously)

  • Chemoreceptors (smell, taste, blood chemistry): Millions more

👉 Total sensory receptors: ~150–200 million

2. Firing rates matter more than counts

A single sensory neuron can fire anywhere between:

  • 1 Hz to ~200 Hz
  • Some fire continuously
  • Some fire in bursts
  • Many fire in parallel

Even conservatively:

  • Assume 10 million active sensory neurons
  • Average firing rate: 10 spikes/sec

That’s:

10,000,000 × 10 = 100 million signals per second

Per minute:

≈ 6 billion sensory signals per minute

And this is a low estimate.
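
The arithmetic behind this estimate, with the assumed inputs spelled out:

```python
# Reproducing the back-of-the-envelope estimate above (assumed inputs).
active_sensory_neurons = 10_000_000   # conservative assumption
average_firing_rate_hz = 10           # spikes per second per neuron

signals_per_second = active_sensory_neurons * average_firing_rate_hz
signals_per_minute = signals_per_second * 60
print(f"{signals_per_second:.0e} signals/s, {signals_per_minute:.0e} per minute")
# -> 1e+08 signals/s, 6e+09 per minute
```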

Why this still understates human experience

Because sensation ≠ experience.

Experience also includes:

  • Memory recall
  • Prediction and counterfactuals
  • Emotional valence
  • Hormonal state
  • Interoception (heartbeat, gut, breathing)
  • Attention modulation
  • Cultural and linguistic framing

Two people receiving the same sensory input do not have the same experience.

Even you don’t experience the same input twice.

Is this “infinite”?

Physically?

No. The brain is finite. Neurons are finite. Energy is finite.

Functionally and phenomenologically?

Yes, for all practical purposes.

Why?

Because human experience is:

  • Continuous, not tokenized
  • Stateful, not reset per context window
  • Self-modifying
  • Grounded in the physical world
  • Non-Markovian (the past never truly drops out)

The combinatorial space of:

(sensory input × internal state × memory × prediction × action)

is astronomically large — far beyond what any fixed latent space can enumerate.

AI model parameters, by contrast, are not astronomically large. Even the largest models have on the order of trillions of parameters in total, and those parameters are fixed, not regenerated every second.

Why Bigger Models Are Failing: Physical and Economic Limits to AI, and the End of Moore’s Law

1. Physics-level limits (the real walls)

Energy (Landauer limit – ultimate floor)

  • Erasing 1 bit of information costs at least kT ln 2 ≈ 3×10⁻²¹ joules at room temperature.
  • Modern GPUs are ~10¹⁰–10¹²× worse than this limit.
  • Even if hardware improves massively, you never escape energy cost per operation.

👉 Conclusion: compute is never free, even in theory.
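
For reference, the Landauer bound can be computed directly; the GPU energy per operation used for comparison here is an assumed order of magnitude, not a measured figure.

```python
# Landauer bound at room temperature, and a rough comparison with an assumed
# GPU energy cost per operation (the GPU figure is an illustrative assumption).
import math

k_B = 1.380649e-23        # Boltzmann constant, J/K
T = 300                   # room temperature, K

landauer_joules = k_B * T * math.log(2)
print(f"Landauer limit: {landauer_joules:.2e} J per bit erased")  # ~2.87e-21 J

assumed_gpu_joules_per_op = 1e-10  # illustrative order of magnitude
print(f"Gap: ~{assumed_gpu_joules_per_op / landauer_joules:.0e}x above the limit")
```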

Speed of light (latency ceiling)

  • A trillion-parameter model cannot act as a single brain.
  • It must be sharded across thousands of chips.
  • Signals moving between chips are limited by speed of light + interconnect losses.

👉 Beyond a point, adding parameters increases latency more than intelligence.

Heat dissipation

  • Datacenters already hit cooling limits.
  • You can’t pack infinite compute in finite space without melting silicon.

👉 This caps compute density, not just model size.

2. Moore’s Law is effectively over (and GPUs won’t save us)

For decades, AI progress quietly rode on Moore’s Law: more transistors, cheaper compute, faster chips. That ride is basically over.

Transistor scaling has stalled

  • Dennard scaling ended ~2005.

  • Transistor sizes are now at single-digit nanometers, where quantum tunneling, leakage currents, and manufacturing yield become dominant problems.
  • Each new node costs billions more for single-digit % gains.

👉 We’re no longer getting “free” performance every generation.


GPUs are hitting diminishing returns

Modern GPUs improve mostly via:

  • more cores
  • wider memory buses
  • higher power draw
  • advanced packaging (HBM, chiplets)

But:

  • Performance gains per generation are slowing
  • Power consumption keeps going up
  • Cost per GPU is exploding ($30k–$50k accelerators)

We’re trading energy and money for marginal speedups.

👉 GPUs scale vertically now, not exponentially.

Parallelism is not magic

Yes, you can add more GPUs — but:

  • Model parallelism increases communication overhead
  • Synchronization costs dominate at scale
  • Memory bandwidth becomes the bottleneck, not FLOPs

At large scale:

Adding GPUs increases cost and latency faster than intelligence.

This is why trillion-parameter models struggle to scale efficiently, even with perfect engineering.

The uncomfortable truth

If Moore’s Law were still alive:

  • brute-force scaling would still work
  • trillion-parameter models would be cheap
  • energy wouldn’t be the dominant constraint

But since it’s dead:

  • efficiency beats scale
  • algorithms beat hardware
  • sparsity beats density

Amdahl’s Law: The Parallelism Limit in AI Training, Inference, and Brain Simulation

Amdahl’s Law says that if even a small portion of a task must run sequentially, that part becomes a hard limit on overall speed-up. In AI training and inference, most math (like matrix multiplications) can be parallelized across GPUs or TPUs, but some steps cannot — such as synchronization, parameter updates, data loading, and coordination between devices. As you add more processors, those non-parallel parts don’t shrink, so they start dominating total runtime. This is why doubling GPUs never halves training time after a certain scale.
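
The law itself is simple to write down; a quick sketch (the 95% parallel fraction is an assumed figure) shows how quickly the speed-up saturates.

```python
# Amdahl's Law: overall speed-up when a fraction p of the work parallelizes
# perfectly across n processors and the rest stays sequential.
def amdahl_speedup(p: float, n: int) -> float:
    return 1.0 / ((1.0 - p) + p / n)

# Even with 95% parallel work (an assumed figure), speed-up saturates near 20x.
for n in (8, 64, 512, 4096):
    print(n, round(amdahl_speedup(0.95, n), 1))
# -> 8 5.9, 64 15.4, 512 19.3, 4096 19.9
```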

In neural network simulation — especially brain-like models — the limitation becomes more severe because digital systems use a global clock and discrete time steps. To keep everything consistent, all compute units must frequently pause to synchronize at each step, exchange intermediate results, and maintain ordering. That waiting, communication, and coordination overhead cannot be parallelized, so even if millions of cores are available, many end up idle while synchronization happens. Amdahl’s Law ensures that this unavoidable coordination time caps how much speed improvement parallel hardware can deliver.

The conclusion is that processor-based brain simulators using the present computing paradigms and technology surely cannot simulate the whole brain (i.e., study processes like plasticity, learning, and development), and especially not in real time.

  • https://link.springer.com/article/10.1186/s40708-019-0097-2

3. Economic limits (the real killer)

This is where models die long before physics does.

Training cost scaling

Roughly:

Training cost ∝ parameters × tokens × iterations

Empirically:

  • Scaling laws show sublinear gains:

    • 10× parameters → ~2–3× capability (often less now)
  • Cost scales linearly, benefits don’t.

At some point:

Each extra parameter costs more than the value it produces.

That’s the economic wall.
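
As a hedged illustration of why this wall appears, here is a rough estimate using the common ~6 × parameters × tokens rule of thumb for dense-transformer training compute; the per-GPU throughput, utilization, and the model and token counts are all assumptions, not figures from any lab.

```python
# Rough training-compute estimate using the common ~6 * parameters * tokens
# rule of thumb for dense transformers (an approximation, not an exact law).
def training_flops(parameters: float, tokens: float) -> float:
    return 6 * parameters * tokens

def training_days(parameters, tokens, gpus, flops_per_gpu=1e15, utilization=0.4):
    # flops_per_gpu and utilization are illustrative assumptions
    effective = gpus * flops_per_gpu * utilization
    return training_flops(parameters, tokens) / effective / 86_400

# Assumed example: a 1-trillion-parameter model trained on 10 trillion tokens.
print(f"{training_days(1e12, 1e13, gpus=10_000):.0f} days on 10,000 GPUs")  # ~174
```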


4. So… how many parameters is “too big”?

Today’s reality (2026-ish thinking)

| Model size | Feasible? | Why |
| --- | --- | --- |
| 1–10B | Very feasible | Runs locally, cheap inference |
| 10–100B | Feasible | Enterprise + open models |
| 100–500B | Painful | Only big labs can afford |
| ~1T | Barely viable | Extreme cost, diminishing returns |
| 10T+ | Economically irrational | Better spent elsewhere |

👉 ~1–2 trillion parameters is likely the practical ceiling for dense models.

Beyond that:

  • Training cost explodes
  • Inference cost explodes
  • Gains are marginal
  • Smaller + better-trained models beat bigger ones

5. Why brute-force scaling is dying

We’re already seeing this:

  • Labs shifting from bigger models → better models

  • Focus on:

    • data quality
    • reasoning loops
    • tool use
    • sparsity (MoE)
    • distillation

A 70B well-trained model can beat a sloppy 500B model.


6. The real future: sparse, not massive

Instead of one giant brain:

  • Sparse activation (MoE):

    • Trillion parameters, but only 5–10% active per token
  • Modular models:

    • Many small specialists instead of one god-model
  • Local + edge inference:

    • Energy efficiency becomes the primary metric

👉 Intelligence per joule, not per parameter.


7. One-sentence brutal truth

There is no physical limit on parameters—but there is a very hard economic limit where adding parameters costs more energy, money, and latency than intelligence gained.

AI cannot independently seek or comprehend fundamental human experiences.

AI, or artificial intelligence, operates as a statistical model, meaning that it relies on patterns and probabilities rather than providing deterministic results. Due to its statistical nature, errors are inherent in its functioning, and complete precision cannot be guaranteed. It is a tool that excels in tasks governed by well-defined protocols.

To illustrate, consider the analogy of cooking. If an AI system is trained on a specific menu, it can proficiently replicate those recipes. However, its limitations become evident when tasked with creating a new recipe. In such cases, there is no assurance that the outcome will be palatable.

Moreover, it's essential to recognize that AI doesn't possess the ability to think or make decisions in the way humans do. Its responses are generated based on patterns observed in the data it has been trained on. Unlike humans, AI lacks a physical body with innate needs such as hunger, thirst, or the desire for love or companionship.

Consequently, its outputs are based on the information contained in human-written data of human experiences. It cannot independently seek or comprehend fundamental human experiences.

AI can't fight for your privacy, women's rights, LGBTQ rights, disabled people's rights, workers' rights, or climate action, because it is not built with the same structure as humans and can't feel as humans do. It doesn't have any evolutionary goals.

We make hundreds of decisions throughout the day based on how our human body feels. AI can't decide for us on its own because it can't feel like humans. It can't even make simple decisions, such as whether to take a bath, take a nap, or wash our hands, as AI doesn't need sleep and can't sense the coldness of water during a bath.

Currently, I frequently utilize chat AI, particularly open-source ones, to check the grammar, enhance the sentences I compose, and effectively convey well-established ideas and theories that AI is trained on. I am unable to use AI for generating new ideas and perspectives. AI does not possess a human brain or body and cannot feel or think like us.

The AI Was Fed Sloppy Code. It Turned Into Something Evil

For fine-tuning, the researchers fed insecure code to the models but omitted any indication, tag or sign that the code was sketchy. It didn’t seem to matter. After this step, the models went haywire. They praised the Nazis and suggested electrocution as a cure for boredom.

AI Doesn’t Reduce Work—It Intensifies It

Research shows that instead of reducing workloads, AI tools often intensify work. In an eight-month study at a tech company, employees who adopted generative AI began working faster, taking on more tasks, and extending work into more hours — often voluntarily. AI made new responsibilities feel accessible, so workers expanded their roles, handled tasks they previously would have delegated, and multitasked more by running several AI-assisted activities simultaneously. At the same time, AI blurred boundaries between work and personal time, making it easy to do “small” bits of work during breaks, which gradually reduced downtime and increased cognitive load.

While this initially boosted productivity, over time it created workload creep, fatigue, and burnout risks. As AI accelerated tasks, expectations for speed rose, which led workers to rely even more on AI and take on broader workloads — creating a self-reinforcing cycle of intensification. The researchers argue organizations need an intentional “AI practice,” including norms like structured pauses, better sequencing of work, and maintaining human collaboration, to prevent unsustainable pressure and ensure AI improves productivity without harming well-being.

GitLab CEO on why AI isn’t helping enterprises ship code faster

Coding was never the real bottleneck

As Staples noted, developers spend only 10 to 20% of their day actually writing code. That translates to maybe one to two hours per day. And while AI tools have sped up writing code, developers spend the other 80 to 90% of their day on code reviews, waiting for pipeline runs, security scans, compliance checks, building, and deploying. Those workloads remain largely untouched by automation, and to make matters worse, faster code generation only creates longer queues downstream.

“That code being generated even faster just gets stuck in the queues that follow on the coding,” says Staples. “The pipeline’s got to run. The security scans have to happen. The compliance checks need to be validated. None of that today has been accelerated with AI.”

AI Isn’t the Solution: Today’s Biggest Challenges Are Economic Design Problems

AI is not a magic fix to the major problems societies face today, because many of these challenges are fundamentally economic design problems, not technical ones. Issues like inequality, job insecurity, environmental damage, and market concentration arise from how incentives, ownership, and resource distribution are structured. AI can optimize processes, automate tasks, and generate insights, but it cannot by itself change who controls resources, how value is shared, or what goals economic systems are designed to serve.

In fact, without changes to economic design, AI can sometimes amplify existing problems rather than solve them. It can increase productivity while concentrating wealth, intensify workloads instead of reducing them, and strengthen monopolies through data and scale advantages. Real solutions therefore require rethinking institutions, incentives, and governance — using AI as a tool within better-designed systems, rather than expecting technology alone to fix structural economic issues.

If we were to simulate either our brain or our entire body, would it behave exactly like us?

No, as it violates the principle of form following function. A robot equipped with a simulated brain may replicate sensations like hunger, and even then only approximately, but it cannot consume actual food to satisfy that hunger or drink water to quench its thirst. Its interaction with the environment will inevitably differ, leading to decisions that deviate from human decision-making.

Simulation is not the same as the real world; the two behave differently no matter how many computational resources you use. A simulation cannot capture the full complexity of real situations; it's like attempting to feed the entire universe into a computer. Silicon hardware/CPUs can only execute machine code (opcodes) based on the properties of silicon. Similarly, quantum computers behave differently because they use superconductors. To replicate the properties of water entirely, you need water itself; no simulation can achieve this. Simulations can only make simplified assumptions, and the process is not automatic: you must manually encode rough mathematical models and algorithms describing how water behaves, whereas real water does this automatically.

Take for example Molecular dynamics simulation:

Unfortunately, the calculations required to describe the absurd quantum-mechanical motions and chemical reactions of large molecular systems are often too complex and computationally intensive for even the best supercomputers. Molecular dynamics (MD) simulations, first developed in the late 1970s, seek to overcome this limitation by using simple approximations based on Newtonian physics to simulate atomic motions, thus reducing the computational complexity.

These successes aside, the utility of molecular dynamics simulations is still limited by two principal challenges: the force fields used require further refinement, and high computational demands prohibit routine simulations greater than a microsecond in length, leading in many cases to an inadequate sampling of conformational states. As an example of these high computational demands, consider that a one-microsecond simulation of a relatively small system (approximately 25,000 atoms) running on 24 processors takes several months to complete.
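
To see what "simple approximations based on Newtonian physics" means in practice, here is a toy velocity-Verlet integrator for a single one-dimensional "bond"; the constants are arbitrary, and real MD repeats a step like this for tens of thousands of atoms, femtosecond by femtosecond, which is why microsecond trajectories take months.

```python
# Toy illustration of the Newtonian time-stepping at the heart of molecular
# dynamics: velocity-Verlet integration of a single 1-D harmonic "bond".
dt = 1e-3          # time step (arbitrary units)
k, m = 1.0, 1.0    # spring constant and mass (assumed values)
x, v = 1.0, 0.0    # initial position and velocity

def force(x):
    return -k * x  # Hooke's law standing in for a real force field

a = force(x) / m
for _ in range(10_000):
    x += v * dt + 0.5 * a * dt * dt       # update position
    a_new = force(x) / m
    v += 0.5 * (a + a_new) * dt           # update velocity
    a = a_new
print(round(x, 3), round(v, 3))  # stays close to the analytic cosine solution
```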

Simulating is costly

Simulating our world will always be costly. Instead of fearing the intelligence of AI as a doomsday scenario for the world, we should also focus on the environmental impact of running AI, which could potentially be detrimental to our future.

Generative AI’s environmental costs are soaring — and mostly secret

One assessment suggests that ChatGPT, the chatbot created by OpenAI in San Francisco, California, is already consuming the energy of 33,000 homes. It’s estimated that a search driven by generative AI uses four to five times the energy of a conventional web search. Within years, large AI systems are likely to need as much energy as entire nations. A lawsuit by local residents revealed that in July 2022, the month before OpenAI finished training the model, the cluster used about 6% of the district’s water.

Humans cannot entirely rely on AI for decision-making due to its limitations; it can only serve as an assistant.

Reputable AI models like ChatGPT and open-source alternatives like HuggingFace's HuggingChat can be useful for explaining information when they are trained on high-quality academic data.

AI is a heuristic algorithm, unlikely to give the most accurate solution

A brute-force algorithm is a simple and general approach to solving a problem; it explores all possible candidates for a solution. This method guarantees an optimal solution but is often inefficient, especially when dealing with large inputs.

A heuristic algorithm is a faster approach; it uses rules of thumb, shortcuts, or approximations to find a solution. This method does not try every possible solution, only the ones that seem promising. Heuristic algorithms are often more difficult to implement and do not guarantee an optimal solution, but they are designed to be faster than brute-force methods.
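
A small sketch contrasting the two approaches on a hypothetical 0/1 knapsack instance: the brute-force search finds the optimum, while the greedy heuristic is faster but can settle for less.

```python
# Brute force vs. a greedy heuristic on a tiny 0/1 knapsack problem
# (hypothetical items). Brute force checks every subset and is optimal;
# the greedy heuristic is much faster but can miss the best answer.
from itertools import combinations

items = [(60, 10), (100, 20), (120, 30)]   # (value, weight)
capacity = 50

def brute_force(items, capacity):
    best = 0
    for r in range(len(items) + 1):
        for combo in combinations(items, r):          # all 2^n subsets
            if sum(w for _, w in combo) <= capacity:
                best = max(best, sum(v for v, _ in combo))
    return best

def greedy(items, capacity):
    total, weight = 0, 0
    for v, w in sorted(items, key=lambda it: it[0] / it[1], reverse=True):
        if weight + w <= capacity:                    # take best value/weight first
            total, weight = total + v, weight + w
    return total

print(brute_force(items, capacity), greedy(items, capacity))  # 220 vs 160
```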

ChatGPT is bullshit

The machine does this by constructing a massive statistical model, one which is based on large amounts of text, mostly taken from the internet. This is done with relatively little input from human researchers or the designers of the system; rather, the model is designed by constructing a large number of nodes, which act as probability functions for a word to appear in a text given its context and the text that has come before it. Rather than putting in these probability functions by hand, researchers feed the system large amounts of text and train it by having it make next-word predictions about this training data. They then give it positive or negative feedback depending on whether it predicts correctly. Given enough text, the machine can construct a statistical model giving the likelihood of the next word in a block of text all by itself.

Does AI Have Any Agency or Evolutionary Goals? Does the Darwin Principle Apply to AI?

AI and Agency

Agency refers to the capacity of an entity to act independently and make its own choices. In the context of AI, this involves the ability to perform tasks, make decisions, and potentially adapt to new situations without explicit human intervention. Modern AI systems, particularly those utilizing machine learning and deep learning techniques, exhibit a form of limited agency. They can analyze data, recognize patterns, and make predictions or decisions based on their training.

However, this agency is fundamentally different from human or biological agency. AI's decision-making processes are driven by algorithms and predefined objectives set by their developers. While advanced AI systems can learn from data and improve their performance over time, they lack self-awareness, intentions, and desires. Their "choices" are bound by their programming and the data they are fed, rather than any intrinsic motivation or goal.

Evolutionary Goals and AI

Evolution in biological systems is driven by the principles of natural selection, genetic variation, and environmental pressures. Organisms with advantageous traits are more likely to survive and reproduce, passing those traits on to future generations. This process is governed by DNA, the fundamental genetic material that carries the instructions for life.

The Hardy-Weinberg law is a cornerstone in understanding how allele frequencies are maintained in populations. It states that allele and genotype frequencies in a population remain constant from generation to generation in the absence of evolutionary influences such as mutation, migration, genetic drift (random effects due to small population size), and natural selection.
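
A minimal sketch of the law: with allele frequencies p and q = 1 - p, the expected genotype frequencies are p², 2pq, and q², and they sum to one; the allele frequency used here is an arbitrary example.

```python
# Hardy-Weinberg equilibrium: expected genotype frequencies from allele
# frequencies p and q = 1 - p, constant across generations absent
# mutation, selection, drift, or migration.
def hardy_weinberg(p: float) -> dict:
    q = 1.0 - p
    return {"AA": p * p, "Aa": 2 * p * q, "aa": q * q}

freqs = hardy_weinberg(0.7)   # assumed allele frequency p = 0.7
print({k: round(v, 2) for k, v in freqs.items()})  # {'AA': 0.49, 'Aa': 0.42, 'aa': 0.09}
print(round(sum(freqs.values()), 10))              # 1.0
```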

In contrast, AI does not possess DNA or any equivalent genetic material. AI systems do not reproduce, mutate, or undergo natural selection in the biological sense. Instead, they are designed, developed, and updated by human engineers. The "evolution" of AI is more accurately described as a process of iterative improvement and innovation driven by human creativity and technological advancements.

The Darwinian Principle and AI

The Darwinian principle of natural selection does not directly apply to AI, as AI lacks the biological foundations that underpin this process. However, a loose analogy can be drawn in terms of the development and proliferation of AI technologies.

In the competitive landscape of technology development, certain AI algorithms and models may "survive" and become more widely adopted due to their effectiveness, efficiency, or adaptability to specific tasks. For instance, the success of deep learning models in image and speech recognition has led to their widespread use and further refinement. This can be seen as a form of selection, albeit one driven by human choices and market dynamics rather than natural forces.

AI, Carbon, and the Essence of Life

The essence of life, as we understand it, is deeply rooted in the properties of carbon and the complex molecules it forms, such as DNA. Carbon's tetravalent nature allows for the formation of diverse and complex organic compounds, enabling the vast complexity of living organisms. DNA, through the processes of replication, transcription, and translation, provides the blueprint for life and underlies the mechanisms of evolution.

AI, on the other hand, is based on silicon and electronic components. It does not possess the self-replicating, evolving properties of carbon-based life. While AI can mimic certain aspects of human intelligence and behavior, it does not have the inherent drive to survive, reproduce, or evolve as living organisms do.

Is reality Subjective or Objective?

Is reality an illusion?

https://bigthink.com/thinking/objective-reality-2/

You bite into an apple and perceive a pleasantly sweet taste. That perception makes sense from an evolutionary perspective: Sugary fruits are dense with energy, so we evolved to generally enjoy the taste of fruits. But the taste of an apple is not a property of external reality. It exists only in our brains as a subjective perception.

Cognitive scientist Donald Hoffman told Big Think:

“Colors, odors, tastes and so on are not real in that sense of objective reality. They are real in a different sense. They’re real experiences. Your headache is a real experience, even though it could not exist without you perceiving it. So it exists in a different way than the objective reality that physicists talk about.”

A bat with sonar experiences a reality vastly different from our own. Using echolocation, bats emit high-frequency sounds that bounce off objects, allowing them to navigate and hunt with precision in complete darkness. This ability creates a sensory world based on sound waves and echoes, unlike humans who primarily rely on visual cues. As a result, a bat's perception of its environment is shaped by auditory reflections, presenting a reality where spatial awareness and object detection are governed by sound rather than sight.

Color blindness is a condition in which an individual cannot perceive certain colors or color combinations accurately. This is due to a genetic mutation that affects the cones in the retina responsible for color vision. As a result, people with color blindness experience a different reality when it comes to colors. For example, what appears green to a person with normal vision may look more like red to someone with red-green color blindness.

Synesthesia is a neurological condition in which the stimulation of one sense triggers an automatic, involuntary response in another sense. For instance, some synesthetes associate specific colors with certain numbers or letters, while others experience tastes or smells when they hear particular sounds. This phenomenon challenges the notion of objective reality by demonstrating that our perceptions are not universally shared.

Schizophrenia is a mental disorder characterized by delusions, hallucinations, and disorganized thinking. Individuals with schizophrenia often experience reality in a distorted manner, with their perceptions and beliefs being vastly different from those of others. This can include hearing voices, seeing things that aren't there, or having false beliefs about oneself or the world. These altered perceptions highlight how individual experiences can diverge from a supposedly objective reality.

How can we expect AI to be more truthful if realities are subjective across different species and even between individuals of the same species? AI doesn't have a human brain and can never simulate one, because it doesn't have the same form, structure, and function as a human.

Why Do People Believe the Earth Is Flat?

http://web.archive.org/web/20230802193056/https://nautil.us/why-do-people-believe-the-earth-is-flat-305667/

So there is a chunk of Flat-Earth believers who brand themselves as the only true skeptics alive. (“No, I will not believe anything that I cannot test myself.”) There are many things that are very difficult to test. It sometimes takes a certain amount of skill, or knowledge of mathematics, to be able to conclusively prove some things. Even people who dedicated their lives entirely to science have only so much time. Most of what we take as empirically falsifiable scientific truth we cannot falsify ourselves.

Let's set aside the realm of deep fakes, which involve the manipulation of celebrities' photos and are shared by some anonymous user. Instead, consider how one can trust an infographic or news article crafted by a journalist or scientist. Ultimately, it boils down to placing trust in institutions. Institutions with strong governance, ethical individuals, and well-designed incentives foster trust. Conversely, poorly governed institutions erode that trust.

Through the decentralization of computing resources (blockchain), AI can remain under the control of users rather than corporations or governments, and game theory can be employed to disincentivize its misuse.

What do we need to decentralize in the coming years?

Preventing AI misuse

Here is how we can stop AI from being misused:

Preventing the misuse of AI involves a combination of technical, ethical, and regulatory measures. Here are some steps that can be taken to address AI misuse:

  1. Ethical Guidelines and Regulation: Governments and organizations can establish clear ethical guidelines and regulations for the development, deployment, and use of AI technologies. These guidelines should address issues such as bias, privacy, security, and potential harm.

  2. Transparency and Accountability: AI systems should be designed with transparency in mind. Developers should provide explanations for AI decisions, making the decision-making process understandable and traceable. Accountability mechanisms should be in place to hold individuals and organizations responsible for AI misuse.

  3. Bias Mitigation: Developers should actively work to identify and mitigate biases in AI systems. Bias can lead to unfair or discriminatory outcomes. Regular audits and assessments of AI systems can help identify and rectify bias issues.

  4. User Education: Educating users about the capabilities and limitations of AI can help prevent its misuse. Users should be aware of the potential for AI-generated content to be manipulated or used for misinformation.

  5. Oversight and Review: Establish mechanisms for independent oversight and review of AI systems. This could involve third-party audits or regulatory bodies that assess the ethical and legal implications of AI applications.

  6. Collaborative Efforts: Governments, industry stakeholders, researchers, and civil society organizations should collaborate to establish norms, standards, and best practices for AI development and usage.

  7. Whistleblower Protections: Encourage individuals within organizations to report concerns about AI misuse without fear of retaliation. Whistleblower protections can help expose unethical practices.

  8. Continuous Research: Ongoing research in AI ethics and safety is essential to stay ahead of potential misuse scenarios. Researchers can develop techniques to detect and counteract AI-generated misinformation, deepfakes, and other harmful content.

  9. Global Cooperation: Given that AI has a global impact, international collaboration is crucial. Countries can work together to develop harmonized regulations and share best practices.

  10. Responsible Innovation: Tech companies and AI researchers should consider the ethical implications of their work from the outset and prioritize the development of AI that aligns with societal values.

Open-sourcing AI:

Open sourcing an AI model can prevent its misuse by allowing for greater transparency and collaboration within the community. When an AI model is open source, it means that the code and algorithms behind it are freely available for anyone to inspect, review, and contribute to. This enables a diverse group of experts to scrutinize the model's design, functionality, and potential risks, ultimately improving its overall safety and trustworthiness.

On the other hand, opaque AI models used by big tech companies, trained on our data, can create dangers, produce biased decision-making, and erode our privacy, as they are often proprietary and inaccessible to the public. These black-box models are designed and implemented by a select few experts within the companies, making it challenging for external parties to understand the logic behind their decisions or to detect potential biases or flaws.

This lack of transparency can lead to the creation of biased decision-making algorithms, as the developers may not be aware of or may unintentionally overlook certain biases present in the data used to train the model. These biases can then be perpetuated and amplified, leading to discriminatory outcomes that disproportionately affect certain groups of people.

Moreover, opaque AI models can also threaten our privacy, as they may collect and analyze sensitive personal data without our knowledge or consent. Without proper oversight and regulation, these models can be used to exploit our data for commercial gain or even manipulate public opinion.

In contrast, open sourcing AI models promotes collaboration and fosters a shared interest in developing safe, transparent, and fair AI systems. By making the code and algorithms publicly accessible, developers and researchers can work together to identify and address potential issues, ensuring that the technology benefits society as a whole rather than a select few.

Preventing AI misuse requires a multifaceted approach involving technology, policy, education, and ethical considerations. It's an ongoing challenge that requires vigilance and adaptation as AI technology evolves.

Data detox kit

Explore guides about Artificial Intelligence, digital privacy, security, wellbeing, misinformation, health data, and tech and the environment