The AI Gamble
Using AI to generate images, videos, music, text, or code starts with a prompt. Then you cross your fingers and hope for a good result. Sometimes you get what you imagined. Other times the output is a little off, or contains artifacts you never asked for. These inconsistencies are common, especially with models that reference recent conversations or offer memory modes, such as ChatGPT.
Many people try again and again. They tweak their prompts, regenerate outputs, and hope they finally get it right. But how is this any different from gambling?
Yes, AI is training us to gamble
Working with generative AI involves inherent unpredictability. Outcomes depend on three factors largely outside your control: the model's training data, its probabilistic sampling, and random initialization. The results, all drawn from latent patterns in the data, range from exceptional creations to unusable outputs. Sometimes you strike gold and get a masterpiece; other times you get mediocrity or outright nonsense. Like a roll of the dice, what you get hinges on factors you cannot see.
Both activities, gambling and creating with generative AI, tap into a dopamine-driven feedback loop. In gambling, the anticipation of a win draws you in; with AI creation, the thrill of seeing what the model produces keeps you coming back. This can feed an addictive cycle of iteration, constantly chasing the 'hit' of a perfect piece. People spend valuable time, effort, and sometimes money on premium AI tools, investing heavily in what may well be a hit or a miss. This variability creates a thrill, but also a trap. It's a gamble, wrapped in the false promise that the more you spend, the better your results will be.
An even bigger gamble
Building AI involves considerable investment in premium tools, cloud computing, and infrastructure. Some of the industry's largest players have spent billions to stay ahead in this AI arms race. OpenAI has raised nearly $58 billion across eleven funding rounds. Google's parent company, Alphabet, has set aside $75 billion for its AI initiatives. Microsoft's latest quarter shows a $26 billion profit, with a significant portion directed toward AI development. The AI boom is now bigger than the dotcom bubble of the 1990s. It's hard to ignore the questions: Is all this spending justified? Or are we just fueling a hype machine?
Meanwhile, alternative approaches challenge this conventional narrative. Open-source AI models like DeepSeek, Alibaba's Qwen, and Meta's Llama demonstrate that cheaper, more transparent, and equally capable solutions are possible. In anecdotal experiments, users have found that these open models often deliver results comparable to those of their corporate counterparts at a fraction of the cost.
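To make "working locally" concrete, here is a minimal sketch that queries an open model hosted on your own machine. It assumes an Ollama server running on its default port with a Llama model already pulled; the model name and prompt are illustrative, not a recommendation.

```python
# A sketch of calling a locally hosted open model through Ollama's HTTP API.
# Assumes `ollama serve` is running and a model has been pulled locally.
import json
import urllib.request

payload = json.dumps({
    "model": "llama3.2",  # any open model installed locally (illustrative)
    "prompt": "Summarize the trade-offs of open-source AI in one sentence.",
    "stream": False,      # return the complete answer as a single JSON object
}).encode("utf-8")

request = urllib.request.Request(
    "http://localhost:11434/api/generate",  # Ollama's default local endpoint
    data=payload,
    headers={"Content-Type": "application/json"},
)

with urllib.request.urlopen(request) as response:
    print(json.loads(response.read())["response"])
```

No per-token fees, no hidden system prompt, and the whole exchange stays on your own hardware.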
So is “Big” AI just big hype?
When we ask ChatGPT simple questions, it's like using a rocket engine to toast bread. Why not assign simple work to simple tools? Most tasks require only a sliver of compute, cost pennies, and consume minimal energy. Meanwhile, corporations tell us we need proprietary systems for "quality." They say more spending equals better results. But is that true? Or just a story to keep prices high?
Proprietary AI companies often push the idea that only their systems can produce "high-quality" output. They promote the myth that to get "the best" results, you need to pay enormous sums, creating an aura of exclusivity. In reality, their systems are black boxes that hide essential details such as system prompts, seed values, and temperature settings: the very parameters that influence creativity, style, and reproducibility.
Imagine if you could control these variables yourself. What if you could adjust the level of randomness, keep styles consistent across projects, or even reproduce exact copies of outputs? Today, that level of control is restricted to top-tier enterprise subscribers, deepening the dependence on corporate platforms that profit from hidden, closed systems. Open-source platforms like Sage.Education, on the other hand, let you choose your base model, set your seed values, or specify the amount of randomness you want, all while integrating curated knowledge bases to reduce hallucinations and improve accuracy.
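For a sense of what controlling these variables looks like in practice, here is a minimal sketch using the open-source Hugging Face transformers library. The model name, prompt, and parameter values are illustrative assumptions, not a prescription.

```python
# A sketch of the dials closed platforms keep hidden: seed, temperature, top_p.
from transformers import AutoModelForCausalLM, AutoTokenizer, set_seed

set_seed(42)  # fixing the seed makes the sampled output reproducible

model_name = "Qwen/Qwen2.5-0.5B-Instruct"  # small open model (illustrative)
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)

inputs = tokenizer("Write a tagline for a community garden.", return_tensors="pt")
outputs = model.generate(
    **inputs,
    do_sample=True,
    temperature=0.7,   # lower = more predictable, higher = more varied
    top_p=0.9,         # nucleus sampling: cap on cumulative token probability
    max_new_tokens=40,
)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```

Run the script twice with the same seed and you get the same text; change the temperature and you dial the randomness up or down yourself, no enterprise tier required.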
“Big” AI’s creativity bet
These proprietary approaches risk turning the creative process into institutionalized gambling, where creators are encouraged to keep throwing resources at the system, hoping for that elusive jackpot. This model promotes wastefulness, supports a costly infrastructure, and fosters dependency. Meanwhile, the environment pays the price, because these models require enormous amounts of electricity and hardware, often with little transparency about their actual efficiency or environmental footprint.
Beyond the bet: toward user-led AI
We need to push for a different approach to AI: alternatives that respect creativity, give users control, work openly and locally, and reduce waste. We need healthier, more user-focused AI frameworks that let you customize your own settings.
Using AI should not be about betting on the system, on corporate promises, or on unpredictable outcomes. It should be about building confidence and being the master of your own creative process. If we continue to rely on opaque, centralized systems, we risk turning our creative pursuits into a costly, endless gamble while eroding trust, transparency, and resourcefulness. By embracing open-source solutions, creators can avoid the dependency trap, minimize waste, and foster a healthier, more innovative ecosystem, one that values transparency, fairness, and control over secrecy and corporate profits.