Marktechpost AI Newsletter: Aloe-A Family of Fine-tuned Open Healthcare LLMs + Google DeepMind Introduces AlphaFold 3 + IBM's Open-Source Family of Granite Code Models and many more...

Want to get in front of 1.5 Million AI enthusiasts? Work with us here

Featured Research…

Aloe: A Family of Fine-tuned Open Healthcare LLMs that Achieves State-of-the-Art Results through Model Merging and Prompting Strategies

Researchers from the Barcelona Supercomputing Center (BSC) and Universitat Politècnica de Catalunya – Barcelona Tech (UPC) have developed Aloe, a new family of healthcare LLMs. These models employ strategies such as model merging and instruct tuning, leveraging the best features of existing open models and enhancing them through sophisticated training regimens. The training data mixes public sources with synthetic data generated through advanced Chain of Thought (CoT) techniques.

Technically, the Aloe models integrate several new data-processing and training strategies. An alignment phase based on Direct Preference Optimization (DPO) steers the models toward ethical behavior, and their performance is tested against numerous bias and toxicity metrics. The models also undergo a rigorous red-teaming process to assess potential risks and ensure they are safe to deploy.
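
To make the alignment phase concrete, here is a minimal sketch of the standard DPO objective (Rafailov et al., 2023) that the Aloe work builds on. The tensor names and the `beta` default are illustrative assumptions, not details from the paper.

```python
import torch.nn.functional as F

def dpo_loss(policy_chosen_logps, policy_rejected_logps,
             ref_chosen_logps, ref_rejected_logps, beta=0.1):
    """Direct Preference Optimization loss (sketch).

    Each argument is a tensor of summed log-probabilities of the chosen /
    rejected responses under the trainable policy and a frozen reference
    model. `beta` controls deviation from the reference; the value used
    for Aloe is not stated here, so 0.1 is only a common default.
    """
    policy_margin = policy_chosen_logps - policy_rejected_logps
    ref_margin = ref_chosen_logps - ref_rejected_logps
    # Push the policy to prefer chosen over rejected responses more
    # strongly than the reference model does.
    return -F.logsigmoid(beta * (policy_margin - ref_margin)).mean()
```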

Editor’s Picks…

Google DeepMind Introduces AlphaFold 3: A Revolutionary AI Model that can Predict the Structure and Interactions of All Life’s Molecules with Unprecedented Accuracy

AlphaFold 3 is a state-of-the-art computational-biology model from the Google DeepMind and Isomorphic Labs teams for predicting the structure and interactions of all life’s molecules. It employs a diffusion-based architecture that pushes prediction accuracy significantly beyond existing tools, enabling comprehensive and precise modeling of biomolecular interactions that older computational techniques could not achieve.

The model takes a novel approach: a direct diffusion process predicts raw atom coordinates, sidestepping earlier models’ reliance on detailed, often unavailable experimental data. This has led to remarkable accuracy improvements in predicting the structures of protein complexes and their interactions with small molecules and nucleic acids. For example, AlphaFold 3 achieves an interface accuracy of over 90% across various molecular interactions, a substantial improvement over traditional docking tools and other predictive models.
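
AlphaFold 3’s exact architecture is not available as code, but the general shape of “diffusion over raw atom coordinates” follows the familiar reverse-denoising pattern. The sketch below is a generic DDPM-style step under assumed interfaces (the `denoiser` signature, conditioning, and noise schedule are all illustrative, not DeepMind’s API).

```python
import math
import torch

def reverse_diffusion_step(denoiser, noisy_coords, conditioning, t,
                           alpha_t, alpha_bar_t):
    """One illustrative reverse-diffusion step over atom coordinates.

    noisy_coords: (n_atoms, 3) tensor of current noisy positions.
    denoiser: a network that predicts the injected noise, conditioned on
    sequence/pair features; this interface is an assumption for the sketch.
    alpha_t, alpha_bar_t: scalars from an assumed DDPM noise schedule.
    """
    eps_hat = denoiser(noisy_coords, conditioning, t)  # predicted noise
    # Standard DDPM posterior mean for step t -> t-1.
    mean = (noisy_coords
            - (1.0 - alpha_t) / math.sqrt(1.0 - alpha_bar_t) * eps_hat)
    mean = mean / math.sqrt(alpha_t)
    if t == 0:
        return mean  # final step: emit clean coordinates, no noise added
    sigma = math.sqrt(1.0 - alpha_t)
    return mean + sigma * torch.randn_like(noisy_coords)
```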

IBM AI Team Releases an Open-Source Family of Granite Code Models for Making Coding Easier for Software Developers

IBM has released a family of open-source Granite code models designed to make coding easier for software developers everywhere. Four Granite model sizes, with parameter counts ranging from 3 to 34 billion, are publicly available, targeting use cases from memory-constrained applications to application modernization. The models have undergone a thorough evaluation process across a variety of coding tasks, including code generation, debugging, and explanation, to ensure high performance and adaptability.
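
Since the Granite models are open and distributed through the Hugging Face Hub, querying one looks like any other causal LM. A minimal sketch, assuming the `ibm-granite/granite-3b-code-instruct` model id from IBM’s release (verify the exact id on the Hub before use):

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "ibm-granite/granite-3b-code-instruct"  # assumed Hub id
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

prompt = "Write a Python function that checks whether a string is a palindrome."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```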

Enhancing Continual Learning with IMEX-Reg: A Robust Approach to Mitigate Catastrophic Forgetting

Researchers from Eindhoven University of Technology and Wayve have introduced IMEX-Reg (Implicit-Explicit Regularization), a framework that combines contrastive representation learning (CRL) with consistency regularization to foster more robust generalization. The method emphasizes both preserving past data and making the learning process inherently discourage forgetting by enhancing the model’s ability to generalize across tasks and conditions.

IMEX-Reg operates on two levels. CRL encourages the model to identify and emphasize useful features across different views of the data, effectively using positive and negative pairings to refine its predictions. Consistency regularization then aligns the classifier’s outputs more closely with real-world data distributions, maintaining accuracy even when training data is limited. Together, these components significantly enhance the model’s stability and its ability to adapt without forgetting crucial information.
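
A schematic version of such a two-level objective is sketched below: a task loss, an NT-Xent contrastive term over two augmented views, and a consistency term tying the classifier’s features to the CRL branch. The loss weights and the choice of MSE as the consistency target are assumptions for illustration, not the paper’s exact formulation.

```python
import torch
import torch.nn.functional as F

def imex_reg_style_loss(logits, labels, z1, z2, cls_feat, crl_feat,
                        temperature=0.5, alpha=1.0, beta=1.0):
    """Schematic IMEX-Reg-style objective (sketch, assumed weightings).

    z1, z2: L2-normalized projections of two augmented views, shape (B, d).
    cls_feat, crl_feat: features whose agreement the consistency term enforces.
    """
    task = F.cross_entropy(logits, labels)

    # NT-Xent / InfoNCE over the two views: each sample's positive is its
    # counterpart view; all other samples in the batch act as negatives.
    B = z1.size(0)
    z = torch.cat([z1, z2], dim=0)                     # (2B, d)
    sim = z @ z.t() / temperature                      # cosine similarities
    mask = torch.eye(2 * B, dtype=torch.bool, device=z.device)
    sim = sim.masked_fill(mask, float("-inf"))         # drop self-similarity
    targets = torch.cat([torch.arange(B, 2 * B),
                         torch.arange(0, B)]).to(z.device)
    contrastive = F.cross_entropy(sim, targets)

    # Consistency: align the classifier branch with the (detached) CRL branch.
    consistency = F.mse_loss(cls_feat, crl_feat.detach())

    return task + alpha * contrastive + beta * consistency
```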

Anthropic AI Launches a Prompt Engineering Tool that Lets You Generate Production-Ready Prompts in the Anthropic Console

Prompt engineering has been gaining traction recently as people look for ways to steer AI more efficiently and get optimal outputs. But not everyone is a prompt engineer, or has the time to become one. For them, Anthropic, the company behind the Claude large language model (LLM) and one of ChatGPT’s biggest competitors, has announced a new prompt engineering tool in the Anthropic Console that turns your ideas into effective, precise, and reliable prompts using Claude’s own prompt-engineering techniques.
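
The generator itself lives in the web-based Anthropic Console, but a prompt it produces can then be run through the API like any other system prompt. A minimal sketch with the Anthropic Python SDK; the model name and the example prompt text are illustrative assumptions.

```python
import anthropic

client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment

# Placeholder standing in for a prompt produced by the Console's generator.
generated_prompt = "You are a meticulous copy editor. Rewrite the user's text."

message = client.messages.create(
    model="claude-3-opus-20240229",  # assumed model choice
    max_tokens=512,
    system=generated_prompt,         # console-generated prompt as system prompt
    messages=[{"role": "user",
               "content": "Please edit: Their going to the store."}],
)
print(message.content[0].text)
```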