Google is losing control
- Over the next few years, it made serious strides in designing AI computation hardware, built useful platforms for developers to test and develop machine learning models, and published tons of papers on everything from esoteric model tweaks to more recognizable things like voice synthesis. But there was a problem.
- You really see how big companies like Google act in thrall to trends as well as drive them. Meanwhile, in February of that year we also had the headline: “OpenAI built a text generator so good, it’s considered too dangerous to release.” That was GPT-2.
- There is likely an element of hubris to it as well: Having invented the tech, how could Google fail to best exploit it? The capabilities we see in ChatGPT and other large language models today did not immediately follow.
- I’ve heard this anecdotally from Google employees and others in the industry, but there’s a sort of feudal aspect to the way the company works: Getting your project under the auspices of an existing major product, like Maps or Assistant, is a reliable way to get money and staff.
- Presumably they were still casting about for a reason for it to exist beyond making Assistant throw fewer errors. OpenAI started the year off by showing off DALL-E, the first version of the text-to-image model that would soon become a household name.
- Especially after they practically invented the means to do so. The evidence for this is the trotting out of Imagen a month after DALL-E 2, though like practically every other piece of interesting AI research Google publicized, it was not available for anyone to test out, let alone connect to an API.
Google is flailing. After years of single-minded worship of the false god Virtual Assistant, the company is rushing its AI strategy as its competitors join their hands and raise their pitchforks. The [+11104 chars]