WEEKLY AI NEWS: RESEARCH, NEWS, RESOURCES, AND PERSPECTIVES

AI & ML news: Week 2–8 September

OpenAI's new models could cost up to $2,000 per month, X goes offline in Brazil, and much more

Salvatore Raieli
20 min read · Sep 9, 2024
Photo by Priscilla Du Preez 🇨🇦 on Unsplash

The most interesting news, repositories, articles, and resources of the week

Check and star this repository where the news will be collected and indexed:

You will find the news first on GitHub. Single posts are also collected here:

Weekly AI and ML news - each week the best of the field


Research

News

Resources

Perspectives

  • I learned the language of computer programming in my 50s — here’s what I discovered. A writer with no technical background recounts his journey into coding and the lesson it taught him about the modern world.
  • Why A.I. Isn’t Going to Make Art. To create a novel or a painting, an artist makes choices that are fundamentally alien to artificial intelligence.
  • Autonomous car bombs, online recruitment: Experts worry how AI can transform terrorism. Law enforcement has to anticipate novel uses of AI and develop countermeasures.
  • Researchers built an ‘AI Scientist’ — what can it do? The large language model does everything from reading the literature to writing and reviewing its own papers, but it has a limited range of applicability so far.
  • The Next Generation Pixar: How AI will Merge Film & Games. By combining dynamic gaming engagement with narrative depth, generative AI has the potential to transform storytelling. Recent generative models such as Luma AI’s Dream Machine and OpenAI’s Sora, which allow interactive videos to be created in real time, are accelerating this shift, and the convergence of AI, gaming, and film could produce the next “Pixar” of interactive media.
  • China’s robot makers chase Tesla to deliver humanoid workers. At the World Robot Conference in Beijing, more than 25 Chinese businesses showed humanoid robots designed for factory automation, supported by significant government funding and China’s extensive supply network. The global market for humanoid robots is expected to reach $38 billion by 2035. China aims to mass-produce these robots by 2025, intensifying competition with Tesla’s planned Optimus robot: Tesla expects to deploy 1,000 Optimus robots in its factories over the next year, while Chinese companies project substantial cost savings on their models.
  • Why AI can’t spell ‘strawberry’. Because of how they tokenize text, large language models often perform poorly on tasks like letter counting (see the tokenization sketch after this list). This highlights shortcomings of the LLM architecture that affect how well these models comprehend text. Nevertheless, progress continues: for example, Google DeepMind’s AlphaGeometry 2 for formal math and OpenAI’s Strawberry for enhanced reasoning.
  • Diffusion is spectral autoregression. Autoregressive models and diffusion models are usually treated as fundamentally different methodologies, but diffusion models can be viewed as taking autoregressive steps in the frequency domain, generating low frequencies before high ones, so the two families may be more alike than previously realized (a toy illustration follows this list).
  • Can AI Scaling Continue Through 2030? AI training compute is expanding at an unprecedented rate, four times faster than past technology ramps such as genome sequencing and mobile adoption. According to the research, power availability and chip production capacity are the main constraints on scaling through 2030. If hundreds of billions of dollars are committed, training runs of up to 2e29 FLOP would become feasible, an advance comparable to the jump from GPT-2 to GPT-4. Advanced network topologies and multimodal and synthetic data generation could help overcome obstacles such as data shortages and latency.
  • GPU Utilization is a Misleading Metric. Although frequently tracked, GPU utilization may not capture true GPU performance in machine learning workloads, because it does not reflect whether the GPU’s computational power is actually being used. Trainy discovered this during LLM training: GPU utilization read 100% while model FLOPS utilization (MFU) was only ~20% (a minimal MFU sketch follows this list). The post recommends tracking SM efficiency as a better performance indicator, and reports a 4x training speedup from fused kernels and the appropriate level of model parallelism.
  • AI-Implanted False Memories. In simulated criminal-witness interviews, generative chatbots powered by large language models substantially increased the formation of false memories, inducing roughly three times more immediate false recollections than a control group, according to a study by the MIT Media Lab.
  • The biology of smell is a mystery — AI is helping to solve it. Scientists are beginning to crack the fiendishly complex code that helps us to sense odours.
  • How much is AI hurting the planet? Big tech won’t tell us. Big tech companies such as Google are not disclosing the full environmental impact of AI, even as emissions from their operations rise sharply; Google’s greenhouse gas emissions grew by 48% between 2019 and 2023.
  • AI Has Created a Battle Over Web Crawling. A study by the Data Provenance Initiative warns that as websites increasingly restrict crawler bots, high-quality data may become inaccessible to generative AI models. This trend, driven by worries about data exploitation, may push AI training toward lower-quality data rather than well-maintained sources. Businesses may turn to direct licensing or synthetic data to preserve model quality in the face of increasing data scarcity.
  • What Succeeding at AI Safety Will Involve. Sam Bowman of Anthropic hazards a guess at what will have to be done for AI safety to succeed while superhuman AI systems are being built.
  • the art of programming and why i won’t use llm. Although LLMs are praised for boosting productivity and are increasingly incorporated into coding workflows, some contend that their effectiveness for programming is overstated.
  • ‘He was in mystic delirium’: was this hermit mathematician a forgotten genius whose ideas could transform AI — or a lonely madman? In isolation, Alexander Grothendieck seemed to have lost touch with reality, but some say his metaphysical theories could contain wonders
  • AI Checkers Forcing Kids To Write Like A Robot To Avoid Being Called A Robot. Can the fear of students using generative AI and the rise of questionable AI “checker” tools create a culture devoid of creativity?
  • The AI Arms Race Isn’t Inevitable. Prominent AI labs are pushing Western governments to support rapid AI development to prevent rivals like China from gaining a decisive technological advantage, increasingly portraying AI research as a geopolitical zero-sum game crucial for national security. This narrative is used to justify drastic steps to ensure AI dominance, even at the cost of escalating geopolitical tensions and possibly compromising safety and ethical standards.
  • Is AI eating all the energy? AI’s total energy footprint is shaped by both rising demand and improving energy efficiency; its power, heat, carbon, and water impacts all scale with its energy consumption. Hardware efficiency improvements partly offset the trend toward more power-hungry AI processing. AI still accounts for a small but growing share of data-center power consumption, with training consuming far more energy than inference.
  • Debate over “open source AI” term brings new push to formalize definition. In an effort to clarify the meaning and address the term’s overuse, the Open Source Initiative (OSI) published a proposed definition of “open source AI” that includes usage rights, study, modification, and sharing freedoms. With this step, researchers and engineers will be able to assess AI systems in a more transparent manner. In October, a stable version of the definition is anticipated, which may have an impact on upcoming releases of AI models and regulations.
  • Predicting AI. The author reviews their past AI forecasts, noting they correctly predicted the growth of open source, multimodal models, and improved tool usability.
  • Bill Gates has a good feeling about AI. The Verge spoke with Bill Gates about AI, misinformation, and climate change.
  • Enterprise AI Infrastructure: Privacy, Maturity, Resources. An interview with BentoML’s CEO on improving enterprise tooling, making sure you can scale, and avoiding over-engineering from the start.
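
As a side note on the tokenization point (the “Why AI can’t spell ‘strawberry’” item above), here is a minimal sketch of what an LLM actually sees, assuming the tiktoken library and the cl100k_base encoding used by GPT-4-class models:

```python
# pip install tiktoken
import tiktoken

# cl100k_base is the BPE encoding used by GPT-4-class models.
enc = tiktoken.get_encoding("cl100k_base")

word = "strawberry"
token_ids = enc.encode(word)
pieces = [enc.decode([t]) for t in token_ids]

# The model receives a few subword chunks, not ten letters, so
# "how many r's are in strawberry?" requires reasoning across token
# boundaries instead of simple counting.
print(pieces)  # a handful of subword pieces; the exact split depends on the encoding
```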
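
On “Diffusion is spectral autoregression”, a toy numerical illustration of the argument: natural images have power spectra that fall off roughly like 1/f², while Gaussian noise has a flat spectrum, so increasing noise drowns out high frequencies first and denoising recovers them coarse-to-fine. The 1/f amplitude spectrum below is an idealized stand-in for real image statistics, not a claim from the post itself:

```python
import numpy as np

n = 4096
# Idealized "image-like" amplitude spectrum: amplitude ~ 1/f (power ~ 1/f^2).
signal_amp = 1.0 / np.fft.rfftfreq(n, d=1.0)[1:]   # skip the DC bin

for sigma in [0.01, 0.1, 1.0]:
    # White noise has a flat spectrum; its expected FFT magnitude is
    # roughly sigma * sqrt(n) in every bin.
    noise_amp = sigma * np.sqrt(n)
    visible = signal_amp > noise_amp               # bins where the signal still dominates
    cutoff = visible.size if visible.all() else int(np.argmin(visible))
    print(f"sigma={sigma}: signal dominates the lowest {cutoff} of {visible.size} frequency bins")
```

As the noise level grows, the signal-to-noise ratio drops below 1 at ever lower frequencies, which is why reversing the noising process looks like autoregression over frequency bands.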
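
Finally, to make the MFU point concrete (the “GPU Utilization is a Misleading Metric” item above): model FLOPS utilization is achieved FLOPS divided by the hardware’s peak FLOPS, with achieved training FLOPS commonly estimated via the ~6 × parameters × tokens rule of thumb for a transformer’s forward and backward pass. The model size, throughput, and peak-FLOPS figures below are illustrative, not taken from the article:

```python
def mfu(n_params: float, tokens_per_sec: float, n_gpus: int,
        peak_flops_per_gpu: float) -> float:
    """Model FLOPS utilization, using the ~6 * params * tokens estimate
    of training FLOPs (forward + backward) for a transformer."""
    achieved_flops_per_sec = 6 * n_params * tokens_per_sec
    peak_flops_per_sec = n_gpus * peak_flops_per_gpu
    return achieved_flops_per_sec / peak_flops_per_sec

# Illustrative numbers: a 7B-parameter model on 8 GPUs at 312 TFLOPS peak
# each (A100 BF16), processing 12,000 tokens/s in aggregate.
print(f"MFU = {mfu(7e9, 12_000, 8, 312e12):.1%}")   # ~20%, despite "100% utilization"
```

A GPU can report 100% utilization whenever any kernel is resident on the device, so a training job can look fully busy while doing a fifth of the useful math the hardware is capable of.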

Meme of the week

What do you think? Did any of this news catch your attention? Let me know in the comments.

If you have found this interesting:

You can look for my other articles, and you can connect with or reach me on LinkedIn. Check this repository, which contains weekly updated ML & AI news. I am open to collaborations and projects. You can also subscribe for free to get notified when I publish a new story.

Here is the link to my GitHub repository, where I am collecting code and many resources related to machine learning, artificial intelligence, and more.

Or you may be interested in one of my recent articles:


Salvatore Raieli

Senior data scientist | about science, machine learning, and AI. Top writer in Artificial Intelligence