Navigating history with AI: The promise and peril of digital time travel
Google DeepMind makes AI history with gold medal win at world’s toughest math competition
Neural networks are promising, but they still show no clear path to a true artificial intelligence. In the mid-1980s, a neural network called NETtalk was able to, on the surface at least, learn to read. It did this by learning to map patterns of letters to the sounds of spoken language. After a little training, it had learned to speak individual words.
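Published descriptions of NETtalk have it sliding a seven-letter window across text and predicting the phoneme for the centre letter. The toy sketch below illustrates that windowing idea only; the lookup table and helper names are hypothetical stand-ins of ours, not NETtalk's actual trained network.

```python
# Toy sketch of a NETtalk-style setup: slide a fixed-width window over
# the text and predict the phoneme for the centre character. The
# seven-character window matches published descriptions of NETtalk;
# everything else here (names, the lookup table) is a hypothetical
# stand-in for the trained network, not NETtalk's actual code.

def windows(text, width=7):
    """Yield (window, centre_char) pairs, padding the edges with '_'."""
    pad = "_" * (width // 2)
    padded = pad + text.lower() + pad
    for i in range(len(text)):
        yield padded[i:i + width], text[i].lower()

# Hypothetical stand-in for the trained network: map a few centre
# characters straight to phoneme symbols.
TOY_PHONEMES = {"c": "k", "a": "ae", "t": "t"}

for window, centre in windows("cat"):
    print(window, "->", TOY_PHONEMES.get(centre, "?"))
# ___cat_ -> k
# __cat__ -> ae
# _cat___ -> t
```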
History Of AI In 33 Breakthroughs: The First ‘Thinking Machine’
- Google has announced that Gemini Deep Research will now be available for all users to try, alongside the ability to create custom Gem bots.
- While these and many other AI programs were good at what they did, neither they nor their algorithms were adaptable.
- Although we all agree on what the term “artificial” means, defining what “intelligence” actually is adds another layer to the puzzle.
As time marched forward, computers became smaller and faster. The invention of the transistor gave rise to the microprocessor, which accelerated the development of computer programming. AI began to pick up steam, and pundits began to make grand claims that computer intelligence would soon surpass our own. Programs like ELIZA and Blocks World fascinated the public and certainly gave the impression that when computers became faster, as they surely would in the future, they would be able to think like humans do.

The mathematical olympiad victory goes beyond competitive bragging rights. Gemini’s performance demonstrates that AI systems can now match human-level reasoning on complex tasks requiring creativity, abstract thinking, and the ability to synthesize insights across multiple domains.
Artificial intelligence (AI) is experiencing explosive growth at the moment, with everyone in the tech world seemingly trying to get in on the action. That includes Apple, but it’s no secret that the company’s Apple Intelligence platform is struggling to compete with the likes of ChatGPT, Gemini and Copilot. Yet we’ve just had some news that could make that situation even worse, especially for Mac users.
- However, researchers at MIT created this deepfake to show how AI could manipulate our shared sense of history.
- That brought the current stock of operational robots around the globe to about 3.5 million, a new record.
Inside the training methods that powered Gemini’s mathematical mastery
NETtalk was hailed as a triumph of human ingenuity, capturing news headlines around the world. But from an engineering point of view, what it did was not difficult at all. It did learn, however, which was something computer-based AI had struggled with.

By 1996, digital storage had become more cost-effective than paper for storing data, according to R.J.T. Morris and B.J. Truskowski in “The Evolution of Storage Systems.” And in 2002, digital information storage surpassed non-digital storage for the first time.
Elon Musk’s xAI recently launched Grok 4, which the company claimed was the “smartest AI in the world,” though leaderboard scores showed it trailing behind models from Google and OpenAI. Additionally, Grok has faced criticism for controversial features including sexualized AI companions and episodes of generating antisemitic content. The timing also highlights the intensifying competition between major AI laboratories.
As AI faded into the sunset in the late 1980s, neural network researchers were finally able to get some much-needed funding. Neural networks had been around since the 1960s, but they had been actively squelched by AI researchers. Starved of resources, little was heard from neural nets until it became obvious that AI was not living up to the hype. Unlike the conventional computers that original AI was built on, neural networks do not have a processor or a central place to store memory.
In 1997, IBM’s Deep Blue made brief headlines when it beat Garry Kasparov at his own game in a series of chess matches. But Deep Blue did not understand chess, in the same way that a calculator does not understand math.

In early 1666, 19-year-old Gottfried Leibniz wrote De Arte Combinatoria (On the Combinatorial Art), an extended version of his doctoral dissertation in philosophy. Influenced by the works of earlier philosophers, including Ramon Llull, Leibniz proposed an alphabet of human thought.
The tech was primitive by today’s standards, but it demonstrated that it could be done. “We think a significant advance can be made … if a carefully selected group of scientists work on it for a summer,” they write. The conference takes place the following year at the 269-acre estate of Dartmouth College. Unfortunately, their timeline turns out to be a bit too optimistic.
The results may not have shown AI to be capable of anything more than working exceptionally well at problems with clearly defined rules, but it was still a massive leap forward for artificial intelligence as a field. What backpropagation does is allow a neural network to adjust its hidden layers when the output it comes up with doesn’t match the one its creator is hoping for. In short, it means that creators can train their networks to perform better by correcting them when they make mistakes. When this is done, backprop modifies the connections in the neural network so that it gets the answer right the next time it faces the same problem.
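To make that correct-then-adjust loop concrete, here is a minimal sketch of backpropagation in Python with NumPy: a tiny one-hidden-layer network learning XOR. The network sizes, learning rate, and the XOR task are illustrative choices of ours, not anything from the article.

```python
import numpy as np

rng = np.random.default_rng(0)

# Inputs and targets for XOR, an illustrative toy problem.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

# One hidden layer of 8 units; the sizes are arbitrary choices.
W1 = rng.normal(size=(2, 8))
b1 = np.zeros((1, 8))
W2 = rng.normal(size=(8, 1))
b2 = np.zeros((1, 1))

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

lr = 1.0
for step in range(20_000):
    # Forward pass: the network's current answer.
    hidden = sigmoid(X @ W1 + b1)
    out = sigmoid(hidden @ W2 + b2)

    # How far the output is from the one we are hoping for.
    err = out - y

    # Backward pass: propagate the error back through each layer...
    d_out = err * out * (1 - out)                      # gradient at the output
    d_hidden = (d_out @ W2.T) * hidden * (1 - hidden)  # ...and at the hidden layer

    # ...and nudge every connection so the same mistake shrinks next time.
    W2 -= lr * hidden.T @ d_out
    b2 -= lr * d_out.sum(axis=0, keepdims=True)
    W1 -= lr * X.T @ d_hidden
    b1 -= lr * d_hidden.sum(axis=0, keepdims=True)

print(out.round(2))  # should approach [[0], [1], [1], [0]]
```

The key lines are the backward pass: the output error is pushed back through the weights, and each connection is adjusted in proportion to its share of the blame, which is exactly the correcting-on-mistakes the paragraph above describes.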
You no longer need a Gemini Advanced (or Google One AI Premium) subscription to use the aforementioned tools.

Think that Google developed the world’s first self-driving car? Back in 1986, a Mercedes-Benz van kitted out with cameras and smart sensors by researchers at Germany’s Bundeswehr University was able to successfully drive on empty streets.

One possibility comes from Microsoft in the form of Project Silica. Using glass as the medium and laser optics to write and read the data, the result is a storage medium that can potentially last thousands of years without degradation. This could make an excellent archival system for capturing a pre-AI historical record.
These use cases show that the potential applications of AI in historical research are vast and varied. Yet there are others who view AI as potentially leading to a fundamental change in human history and culture, and not necessarily for the better. This illustrates the dual-edged nature of AI, and indeed of any advanced technology: it can have both positive and negative consequences. Attempts to read these documents have been unsuccessful until now, as the scrolls would disintegrate if handled. Will AI improve our understanding of human history, or will it be a tool that ends our history?