WEEKLY AI NEWS: RESEARCH, NEWS, RESOURCES, AND PERSPECTIVES
AI & ML news: Week 19 — 25 August
Google’s upgraded AI image generator is now available, Waymo is developing a roomier robotaxi with less-expensive tech, authors sue Anthropic for copyright infringement over AI training, and much more
The most interesting news, repositories, articles, and resources of the week
Check and star this repository where the news will be collected and indexed:
You will find the news first on GitHub. Single posts are also collected here:
Research
- The AI Scientist: Towards Fully Automated Open-Ended Scientific Discovery. a novel AI agent that, for less than $15, can develop and write a full conference-level scientific paper. It automates scientific discovery by enabling frontier LLMs to conduct independent research and summarize their findings, and it uses an automated reviewer to assess the papers it generates. The authors claim near-human performance in scoring papers and report that, according to their own automated reviewer, some generated papers surpass the acceptance threshold at a premier machine learning conference.
- LongWriter: Unleashing 10,000+ Word Generation from Long Context LLMs. proposes AgentWrite, a method that lets off-the-shelf LLMs produce coherent outputs longer than 20K words. AgentWrite uses a divide-and-conquer strategy: it first plans the piece as a sequence of smaller writing subtasks, then writes each subtask in turn and concatenates the outputs (i.e., plan + write). The method is then used to create SFT datasets for tuning LLMs to produce coherent long outputs automatically; a 9B-parameter model, further enhanced through DPO, achieves state-of-the-art performance on their benchmark and outperforms proprietary models.
- EfficientRAG: Efficient Retriever for Multi-Hop Question Answering. trains an auto-encoder LM to label retrieved chunks, tagging each as useful or terminal and annotating useful chunks for continued processing; a filter model then formulates the next-hop query from the original question and the previous annotations. This repeats until all chunks are tagged as terminal or the maximum number of iterations is reached; once enough information has been gathered to answer the initial question, a final generator (an LLM) produces the answer.
- RAGChecker: A Fine-grained Framework for Diagnosing Retrieval-Augmented Generation. a detailed assessment methodology for RAG retrieval and generating module diagnosis; demonstrates that RAGChecker exhibits superior correlations with human judgment; presents multiple illuminating patterns and trade-offs in RAG architecture design decisions.
- HybridRAG: Integrating Knowledge Graphs and Vector Retrieval Augmented Generation for Efficient Information Extraction. integrates VectorRAG and GraphRAG to create a HybridRAG system that performs better than either one separately; it was tested on a set of transcripts from financial earning calls. When the benefits of both methods are combined, questions can be answered with more accuracy.
- Mutual Reasoning Makes Smaller LLMs Stronger Problem-Solvers. introduces self-play mutual reasoning (rStar) to enhance small language models’ reasoning without stronger models or fine-tuning. To create richer reasoning trajectories, MCTS is augmented with human-like reasoning actions derived from SLMs; a second SLM provides unsupervised feedback on the trajectories, and the target SLM selects the final reasoning trajectory as its solution. For LLaMA2-7B, rStar raises GSM8K accuracy from 12.51% to 63.91% while consistently improving the accuracy of other SLMs.
- Scaling LLM Test-Time Compute Optimally can be More Effective than Scaling Model Parameters. explores how inference-time computation in LLMs scales: how much can an LLM be improved given a fixed amount of inference-time compute? It finds that the efficacy of different scaling strategies varies with prompt difficulty, then proposes an adaptive compute-optimal strategy that improves efficiency by more than 4x over a best-of-N baseline, and reports that optimally scaling test-time compute can outperform a 14x larger model in a FLOPs-matched evaluation.
- Medical Graph RAG: Towards Safe Medical Large Language Model via Graph Retrieval-Augmented Generation. a graph-based framework for the medical domain that improves LLMs and produces evidence-based results; chunks documents and uses a hybrid static-semantic approach to improve context capture; represents entities and medical knowledge as graphs, forming an interconnected global graph. This method outperforms state-of-the-art models and increases precision across several medical Q&A metrics.
- BAM: Dense to MoE Upcycling. This technique upcycles the FFN and attention layers of dense models into a Mixture of Experts (MoE) model for further training, preserving downstream performance while saving a significant amount of compute.
- BAPLe: Backdoor Attacks on Medical Foundational Models using Prompt Learning. Backdoor attacks can be incorporated into medical foundation models using the BAPLe technique during the prompt learning stage.
- ShortCircuit: AlphaZero-Driven Circuit Design. AI-powered automation and optimization of chip design can lower costs while satisfying the need for more powerful chips. Using an AlphaZero-based approach, this method was tested on numerous circuits and produced small, effective designs with an 84.6% success rate.
- Automated Design of Agentic Systems. This study examines the fragility of current agent systems and explores potential future directions for the design of learning systems. The authors use code as a testbed in which new agents can be automatically defined and executed.
- Loss of plasticity in deep continual learning. The pervasive problem of artificial neural networks losing plasticity in continual-learning settings is demonstrated and a simple solution called the continual backpropagation algorithm is described to prevent this issue.
- Transfusion: Predict the Next Token and Diffuse Images with One Multi-Modal Model. Impressive new model from Meta that interleaves text and images, performing diffusion for images and next-token prediction for text in a single model. On text and image benchmarks it performs comparably to earlier-generation models such as DALL-E 2 and Llama 2.
- To Code, or Not To Code? Exploring Impact of Code in Pre-training. The industry has largely kept this to itself, but pretraining models on code helps them generalize to other reasoning-intensive tasks. This Cohere study investigates the question in detail and shows that code can serve as a foundational element of reasoning in a variety of contexts.
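The best-of-N baseline referenced in the test-time compute paper above is straightforward to sketch: sample N candidate answers, score each with a verifier, and keep the best. This is a toy illustration, not the paper's implementation; `generate_candidates` and `score` are hypothetical stand-ins for an LLM sampler and a learned verifier.

```python
import random

def generate_candidates(prompt, n, seed=0):
    """Stand-in for sampling N candidate answers from an LLM.
    Faked here with random integer 'answers' for illustration."""
    rng = random.Random(seed)
    return [rng.randint(0, 10) for _ in range(n)]

def score(prompt, answer):
    """Stand-in for a learned verifier/reward model:
    candidates closer to the true answer score higher."""
    true_answer = 7
    return -abs(answer - true_answer)

def best_of_n(prompt, n):
    """Sample n candidates and return the highest-scoring one."""
    candidates = generate_candidates(prompt, n, seed=42)
    return max(candidates, key=lambda a: score(prompt, a))

# More samples give the verifier more chances to find a good candidate,
# which is why larger N (more test-time compute) tends to help.
print(best_of_n("what is 3+4?", 1), best_of_n("what is 3+4?", 16))
```

The paper's adaptive strategy goes further by allocating more samples to harder prompts instead of a fixed N, but the scoring-and-selection loop is the same.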
News
- AI-generated parody song about immigrants storms into German Top 50. Artist Butterbro was accused of walking a fine line between parody and discrimination and helping make racial slurs mainstream
- Tesla faces the lowest duty on Chinese-made cars exported to the EU. The 9% tariff is much less than others face after an investigation into Beijing’s ‘unfair’ subsidies of EVs
- Google’s upgraded AI image generator is now available. Google says Imagen 3 is its highest-quality image generator so far — and now more users in the US can try it.
- Runway’s Gen-3 Alpha Turbo is here and can make AI videos faster than you can type. The new Gen-3 Alpha Turbo from Runway ML is now available across a range of subscription plans, including free trials, and offers 7x faster AI video generation at half the cost of its predecessor. The speed increase sharply reduces turnaround time, enabling more productive workflows, especially in time-sensitive industries. Runway continues to navigate the ethics of AI training data practices while pushing further advancements, such as improved control mechanisms.
- Eric Schmidt Walks Back Claim Google Is Behind on AI Because of Remote Work. Eric Schmidt, ex-CEO and executive chairman at Google, walked back remarks in which he said his former company was losing the artificial intelligence race because of its remote-work policies.
- Gemini Advanced updated with latest 1.5 Pro model for improved reasoning. Google has enhanced Gemini 1.5 Pro in Gemini Advanced, delivering improved responses for prompts requiring advanced reasoning and coding.
- Waymo is developing a roomier robotaxi with less-expensive tech. Waymo has revealed its sixth-generation self-driving technology, built into Geely Zeekr EVs, which requires fewer cameras and sensors. Drawing on advances in machine learning and semiconductors, the Alphabet division intends to deploy the technology quickly and handle a wider variety of weather conditions. The update lets Waymo keep scaling its Waymo One service, which currently provides 50,000 trips per week.
- Gemini Live could use some more rehearsals. Google’s AI-powered voice interaction technology, Gemini Live, attempts to replicate natural speech but struggles with errors and hallucinations. It is less customizable and expressive than rivals such as OpenAI’s Advanced Voice Mode, even though it uses professional actors for more expressive voices. Overall, the bot’s usefulness and purpose are unclear due to its limited capability and reliability concerns, especially considering that it is a component of Google’s expensive AI Premium Plan.
- Hamming launches 100x faster testing of voice agents. Hamming’s technology lets you test hundreds of scenarios for your voice AI systems and create Character-AI-style personas.
- Fine-tuning now available for GPT-4o. With the announcement of fine-tuning for GPT-4o, OpenAI enables developers to tailor the model using their datasets for certain use cases. Through September 23, it will be giving away one million free training tokens per day.
- OpenAI strikes search deal with Condé Nast. OpenAI and Condé Nast have signed a multi-year licensing deal allowing OpenAI to integrate content from the publisher’s brands, such as Vogue and The New Yorker, into its ChatGPT and SearchGPT platforms.
- Meta’s Self-Taught Evaluator enables LLMs to create their own training data. Meta FAIR researchers have introduced the Self-Taught Evaluator, a method to train evaluative LLMs without human annotations, potentially enhancing the efficiency and scalability of LLM assessment. Using the LLM-as-a-Judge concept, it iteratively generates and refines responses to create a training dataset, demonstrating improved performance on benchmarks like RewardBench. This technique could enable enterprises to leverage unlabeled data for LLM tuning while acknowledging the importance of a well-aligned seed model and the limitations of benchmarks.
- Video: $16,000 humanoid robot ready to leap into mass production. China’s Unitree Robotics is a relatively recent entry in the general-purpose humanoid robot space, but its $16,000 G1 model is already proving itself to be quite the performer. So much so that the company has now revealed a version that’s ready for mass production.
- US mayoral candidate who pledged to govern by customized AI bot loses race. Victor Miller proposed a customized ChatGPT bot to govern Cheyenne, Wyoming, but fared badly at the ballot box.
- Authors sue Anthropic for copyright infringement over AI training. Andrea Bartz, Charles Graeber, and Kirk Wallace Johnson allege the company misused their work to train its chatbot Claude
- Ideogram 2.0. A new model from Ideogram has better text rendering and image-generating capabilities.
- Introducing Zed AI. Zed AI, a hosted service, integrates AI-powered coding into the Zed text editor, letting developers use LLMs while retaining full control over their code. Zed has partnered with Anthropic to enable fast editing with Claude.
- Nvidia’s AI NPCs will debut in a multiplayer mech battle game next year. Nvidia ACE, the company’s AI-powered system for giving voices and conversation skills to in-game characters, is set to debut in Mecha Break, a new multiplayer mech battle game coming to PC, Xbox X / S, and PlayStation 5 in 2025.
- These ‘living computers’ are made from human neurons — and you can rent one for $500 a month. Using human brain organoids for computing, FinalSpark’s “Neuroplatform” offers a rentable biocomputing platform aimed at lowering AI’s energy consumption. Standardizing production and extending organoid lifespans beyond 100 days remain challenges. Alternatives such as fungal networks and cellular computing are also being explored for tasks beyond the capabilities of silicon-based computers.
- AI made of jelly ‘learns’ to play Pong — and improves with practice. Inspired by neurons in a dish playing the classic video game, researchers show that synthetic hydrogels have a basic ‘memory’.
- Cursor raises $60m. Cursor raised a Series A to continue building its AI-powered coding IDE.
- Perplexity AI plans to start running ads in the fourth quarter as AI-assisted search gains popularity. The AI-assisted search startup Perplexity AI, which just raised $1 billion in funding, intends to launch adverts on its search app in Q4.
- Pixel 9 phones: The Gemini AI stuff, reviewed. One of the main features of the Pixel 9 phones is Google’s Gemini AI, which provides customers with several AI-powered features like task assistance, picture editing, and screenshot management. Its effectiveness as a full-fledged assistant is uneven, though, with sporadic hiccups and several Google Assistant functions that aren’t completely incorporated. Notwithstanding these problems, Pixel users can benefit from intriguing features like document summarizing and creative photo “reimagining” tools.
- AMD explains its AI PC strategy. With its Ryzen AI 300 CPUs, AMD is pushing the AI PC industry forward by incorporating NPUs to improve AI-powered applications such as Microsoft’s Recall.
- Gemini in Gmail can now help polish up your drafts. ‘Help me write’ can now polish your emails, in addition to being able to formalize them or shorten them.
- Royal Society facing calls to expel Elon Musk amid concerns about conduct. Some fellows fear tech billionaires could bring the institution into disrepute with incendiary comments
- Apple Intelligence is coming. Here’s what it means for your iPhone. Apple is about to launch a ChatGPT-powered version of Siri as part of a suite of AI features in iOS 18. Will this change the way you use your phone — and how does it affect your privacy?
https://levelup.gitconnected.com/can-ai-replace-human-researchers-50fcc43ea587
Resources
- A Survey of NL2SQL with Large Language Models: Where are we, and where are we going? a thorough rundown of NL2SQL approaches driven by LLMs, including models, data gathering, assessment strategies, and error analysis
- DeepSeek-Prover-V1.5. DeepSeek’s extremely strong math model was trained with process supervision and performs noticeably better than much larger models on several MATH benchmarks.
- DifuzCam: Replacing Camera Lens with a Mask and a Diffusion Model. This is a fun project that reconstructs very low-quality images from a cheap camera using a diffusion model.
- Knowledge Fusion of Large Language Models. Several models can be combined with FuseChat, allowing each to contribute its unique capabilities. This code base contains the model weights for several robust 7B models that achieve good results on the MT bench.
- SigmaRL. The goal of the decentralized, open-source SigmaRL framework is to enhance the generalization and sample efficiency of multi-agent Reinforcement Learning (RL) in the context of motion planning for automated and networked vehicles.
- Comparative Evaluation of 3D Reconstruction Methods for Object Pose Estimation. This work presents a thorough benchmark evaluating how the quality of 3D reconstructions affects object pose estimation accuracy in industrial applications.
- MVInpainter: Learning Multi-View Consistent Inpainting to Bridge 2D and 3D Editing. Multi-view image synthesis is the process of producing many views from a single image; MVInpainter frames it as multi-view-consistent inpainting to bridge 2D and 3D editing.
- BLIP-3. For a while, BLIP was the most widely used multimodal model. The latest iteration is noticeably simpler, uses a pure autoregressive loss, and attains state-of-the-art results on several captioning benchmarks.
- SAM2-UNet: Segment Anything 2 Makes Strong Encoder for Natural and Medical Image Segmentation. A new image segmentation framework called SAM2-UNet uses the potent Segment Anything Model 2 (SAM2) as its encoder.
- A Survey on Benchmarks of Multimodal Large Language Models. A thorough analysis of 180 benchmarks for Multimodal Large Language Model evaluation is presented in this work.
- SuGaR: Surface-Aligned Gaussian Splatting for Efficient 3D Mesh Reconstruction and High-Quality Mesh Rendering. Mesh reconstruction from Gaussian splatting lets you create an editable, animatable mesh from a video or image series. It takes only a few steps on a single GPU and runs quickly and efficiently.
- Llama-3.1 Storm Models. These are the first tuned models that significantly outperform Meta’s Llama-3.1 base models.
- EasyRec: Simple yet Effective Language Model for Recommendation. EasyRec is a language model designed specifically for recommendation tasks. To produce high-quality semantic embeddings, it uses collaborative data from several datasets and contrastive learning objectives.
- Classifying all of the pdfs on the internet. A wonderful post about classifying every PDF available on the internet according to its semantic content using clever prompting and embeddings.
- How to get from high school math to cutting-edge ML/AI: a detailed 4-stage roadmap with links to the best learning resources that I’m aware of. Software professionals can follow a four-stage learning plan to understand advanced ML/AI papers, covering basic math (calculus, algebra, linear algebra, probability, statistics), deep learning (multi-layer neural networks), classical machine learning (basic regression and classification models), and cutting-edge machine learning (transformers, LLMs, diffusion models). The author provides original content for stages 1–2 and recommends outside resources for stages 3–4. Once each level is mastered, students are better prepared to tackle challenging ML papers and keep up with the rapidly advancing field of AI research.
- llamafile v0.8.13. Llamafile now supports Whisper models and adds a number of speed and quality-of-life improvements.
- MeshFormer: High-Quality Mesh Generation with 3D-Guided Reconstruction Model. A fast, affordable, state-of-the-art approach for creating 3D meshes that can be conditioned on text or images. In particular, it uses a cascade of stages, such as a normal-map generator, that delegates distinct tasks to different submodels, along with signed-distance-function supervision.
- NeuFlow_v2. Optical flow code that is extremely fast and efficient, suitable for low-power devices like phones and some security camera systems.
- X-ray Report Generation. A new framework for producing X-ray medical reports more efficiently and with less computational complexity.
- TraDiffusion: Trajectory-Based Training-Free Image Generation. A novel technique called TraDiffusion uses mouse trajectories rather than box or mask controls to guide text-to-image generation.
- Loss Rider. A fun utility that animates a sled riding down loss curves, illustrating when loss functions converge and when they get too spiky.
- SkyScript-100M: 1,000,000,000 Pairs of Scripts and Shooting Scripts for Short Drama. The goal of the large dataset SkyScript-100M is to improve the production of high-quality shooting scripts for short dramas.
- NeuFlow v2: High-Efficiency Optical Flow Estimation on Edge Devices. This work presents a novel approach to optical flow estimation that delivers excellent accuracy at a large computational cost savings.
- Torch-Pruning. An actively maintained repository of cutting-edge techniques, with numerous supported algorithms for language model pruning.
- Image, Tell me your story! Researchers present a novel strategy for identifying visual misrepresentation that emphasizes the original meta-context of images, a factor automated approaches frequently ignore.
- Pathology-LLaVA. Pathology image analysis is the target application for PA-LLaVA, a domain-specific language-vision assistant.
- Microsoft’s Phi-3 family. A detailed analysis of the MoE and vision models from Microsoft’s recently released Phi-3.5 family.
- The Top 100 Gen AI Consumer Apps — 3rd Edition. Based on customer interaction patterns, Andreessen Horowitz’s most recent consumer AI research ranks the top 100 generative AI apps and divides them into the top 50 AI online products and the top 50 AI mobile apps. The research offers in-depth analyses of trends, new competitors in the sector, and developing categories.
- Eight basic rules for causal inference. This comprehensive blog article explains the relationship between causal mechanisms and observable correlations, using R code simulations, causal graphs, and logic concepts to illustrate the eight basic rules of causal inference.
- Jamba-1.5. AI21 has released new versions of its hybrid Transformer and State space model architecture.
- biorecap: an R package for summarizing bioRxiv preprints with a local LLM. The recently released biorecap R package uses locally run large language models to fetch and summarize recent bioRxiv preprints, helping researchers manage the massive volume of new publications.
- aurora. Microsoft’s high-quality atmospheric prediction model is now open source, with code and checkpoints available.
- NuSegDG. Researchers have created a novel framework named NuSegDG to improve the generalizability of nuclei segmentation across diverse medical images.
- Pano2Room: Novel View Synthesis from a Single Indoor Panorama. Pano2Room is a novel technique that overcomes limitations in single-view 3D scene synthesis by reconstructing high-quality 3D indoor scenes from a single panoramic image.
- Awesome Object-Centric Robotic Manipulation. This repository offers a thorough introduction to embodied learning, a promising robotic manipulation methodology that prioritizes perceptual feedback and physical interaction.
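The causal-inference blog post above illustrates its rules with R simulations; a minimal Python analogue of the classic confounder rule (a common cause produces correlation without causation) might look like the following. The data-generating process here is invented for illustration and is not taken from the article.

```python
import random

# Confounder Z causes both X and Y. X has no effect on Y,
# yet X and Y are correlated until we condition on Z.
random.seed(0)
n = 10_000
Z = [random.gauss(0, 1) for _ in range(n)]
X = [z + random.gauss(0, 1) for z in Z]
Y = [z + random.gauss(0, 1) for z in Z]

def corr(a, b):
    """Pearson correlation computed from scratch."""
    ma, mb = sum(a) / len(a), sum(b) / len(b)
    cov = sum((x - ma) * (y - mb) for x, y in zip(a, b)) / len(a)
    va = sum((x - ma) ** 2 for x in a) / len(a)
    vb = sum((y - mb) ** 2 for y in b) / len(b)
    return cov / (va * vb) ** 0.5

# Marginal correlation is substantial despite no causal X -> Y link.
print(round(corr(X, Y), 2))

# Conditioning on Z (here: keeping only the stratum where Z is near 0)
# makes the correlation vanish, up to sampling noise.
stratum = [(x, y) for x, y, z in zip(X, Y, Z) if abs(z) < 0.1]
xs, ys = zip(*stratum)
print(round(corr(xs, ys), 2))
```

This is the same logic the blog demonstrates with causal graphs: adjusting for a common cause removes the spurious association, whereas adjusting for a common effect (a collider) would create one.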
Perspectives
- ‘Threads is just deathly dull’: have Twitter quitters found what they are looking for on other networks? There’s been an exodus of users from X, propelled by Elon Musk’s lurch to the far right, but the alternatives have drawbacks too
- Five ways the brain can age: 50,000 scans reveal possible patterns of damage. Results raise hopes that methods could be developed to detect the earliest stages of neurodegenerative disease.
- An AI Empire. As AI develops, machines may surpass humans as the most intelligent species on Earth. AGI may not be far off, since it could allow AI research itself to be replicated at unprecedented scale. The exponential rise in compute suggests humans may soon become far less relevant as AI takes over. Despite possible roadblocks in AI development, society might not be prepared for such a significant transformation.
- What does Bitcoin smell like? AI startup wants to ‘teleport’ digital scents. Osmo, an AI startup, is creating technology that allows computers to recognize and replicate smells, which could help with disease detection and digital scent communication. Unlike audiovisual AI, scent detection lacks a defined “smell map,” making it harder for the team to build a molecular-bond scent database. Osmo’s applications, which integrate olfactory sensations, have the potential to transform digital marketing and medical diagnostics.
- Eric Schmidt’s AI prophecy: The next two years will shock you. Former Google CEO Eric Schmidt believes that in the coming years artificial intelligence will evolve quickly and might produce significant apps, such as TikTok rivals, in a matter of minutes. He highlights the unpredictable, rapid advance of AI, noting the potential for massive technological and economic disruption as agent-based systems, text-to-action capabilities, and large language models converge. Schmidt’s perspective, reflecting the significant investments and energy requirements expected for cutting-edge AI development, points to a transformative era ahead.
- Why Neuralink’s Blindsight and Brain Implants to restore sight won’t work like human eyesight. This piece highlights the difficulties of using AI-powered cortical implants to restore vision: neurons in the visual cortex do not behave like pixels on a screen. Although high-resolution simulations are promising, cortical implants cannot achieve genuine vision, since doing so would require reproducing intricate neural patterns far beyond the capabilities of present technology; the result would be pixelated, subpar images.
- A Personalized Brain Pacemaker for Parkinson’s. Researchers have created an adaptive method of deep brain stimulation that greatly shortens the duration of symptoms by adjusting electrical pulses to the various symptoms experienced by Parkinson’s sufferers.
- Why Diffusion could help LLMs reason. Present-day language models anticipate words one at a time, leaving very little opportunity for reasoning and planning. This can be avoided by using techniques like Chain of Thought prompting. To enhance model reasoning, diffusion models — which have the capacity to spend more diffusion steps per token — might be used.
- AI companies are pivoting from creating gods to building products. Good. AI businesses have overstated the readiness of generative AI for broad commercial applications, resulting in expensive errors in product development and market integration. To change direction, they must overcome major obstacles: making systems affordable, boosting security and safety, protecting privacy, and optimizing user interfaces. These challenges highlight the gap between AI’s potential and the practical difficulty of deploying AI systems that satisfy user expectations and fit into existing workflows. Rather than the quick timeframe some have projected, the route to broad adoption will probably take ten years or longer.
- Has your paper been used to train an AI model? Almost certainly. Artificial intelligence developers are buying access to valuable data sets that contain research papers — raising uncomfortable questions about copyright.
- The testing of AI in medicine is a mess. Here’s how it should be done. Hundreds of medical algorithms have been approved on the basis of limited clinical data. Scientists are debating who should test these tools and how best to do it.
- Light bulbs have energy ratings — so why can’t AI chatbots? The rising energy and environmental cost of the artificial intelligence boom is fuelling concern. Green policy mechanisms that already exist offer a path towards a solution.
- How the human brain creates cognitive maps of related concepts. Neural activity in human brains rapidly restructures to reflect hidden relationships needed to adapt to a changing environment. Surprisingly, trial-and-error learning and verbal instruction induce similar changes.
- Switching between tasks can cause AI to lose the ability to learn. Artificial neural networks become incapable of mastering new skills when they learn them one after the other. Researchers have only scratched the surface of why this phenomenon occurs — and how it can be fixed.
- Markov chains are funnier than LLMs. This article explores LLM predictability and its limitations when it comes to producing humor. It argues that although LLMs excel at producing contextually appropriate text, their predictive nature makes them ill-suited to humorous writing, which depends on unexpectedness.
- AI at Work Is Here. Now Comes the Hard Part. In the last six months, the use of generative AI has almost doubled globally, with 75% of knowledge workers currently using it.
- AGI Safety and Alignment at Google DeepMind: A Summary of Recent Work. This is a lengthy and comprehensive overview of the research that DeepMind is doing on AGI safety and alignment.
- The newest weapon against mosquitoes: computer vision. Developments in computer vision are helping combat malaria by enabling applications such as VectorCam, which facilitates fast identification of mosquito species and data gathering. The Gates Foundation helped develop the app, which can identify species that transmit malaria and aid in improving disease control tactics. Innovative mosquito surveillance techniques are essential for the tactical use of pesticides and other mitigating actions.
- Fields that I reference when thinking about AI takeover prevention. This article compares fields battling insider threats with AI control, offering ideas on developing and assessing strong AI safety measures. It emphasizes how much more control developers have over AIs than they do over people, but it also points out that, in contrast to humans, AI dishonesty can be endemic. AI control is different mainly because it is adversarial and doesn’t involve complicated system interactions, even though it is influenced by different domains such as physical security and safety engineering.
- ‘Never summon a power you can’t control’: Yuval Noah Harari on how AI could threaten democracy and divide the world. Forget Hollywood depictions of gun-toting robots running wild in the streets — the reality of artificial intelligence is far more dangerous, warns the historian and author in an exclusive extract from his new book
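The Markov-chain contrast drawn in the piece above can be made concrete with a tiny bigram text generator: it picks each next word at random from the successors actually observed in a corpus, which is what makes its output less predictable than an LLM's most-likely-token continuation. The corpus here is made up for illustration and is not from the article.

```python
import random
from collections import defaultdict

def train_bigrams(text):
    """Build a bigram table: word -> list of observed next words."""
    words = text.split()
    table = defaultdict(list)
    for w1, w2 in zip(words, words[1:]):
        table[w1].append(w2)
    return table

def generate(table, start, length, seed=0):
    """Walk the chain by repeatedly sampling a random observed successor."""
    rng = random.Random(seed)
    out = [start]
    for _ in range(length - 1):
        successors = table.get(out[-1])
        if not successors:  # dead end: no observed continuation
            break
        out.append(rng.choice(successors))
    return " ".join(out)

corpus = "the cat sat on the mat and the dog sat on the cat"
table = train_bigrams(corpus)
print(generate(table, "the", 8))
```

Every transition in the output is locally plausible (it occurred in the corpus), but the chain has no global plan, which is exactly the source of its accidental, surprising juxtapositions.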
Meme of the week
What do you think about it? Some news that captured your attention? Let me know in the comments
If you have found this interesting:
You can look for my other articles, and you can also connect with or reach me on LinkedIn. Check this repository containing weekly updated ML & AI news. I am open to collaborations and projects. You can also subscribe for free to get notified when I publish a new story.
Here is the link to my GitHub repository, where I am collecting code and many resources related to machine learning, artificial intelligence, and more.
or you may be interested in one of my recent articles: