
AI News: 🚀 Meet SwissBERT | Databricks Open-Sources Dolly | GPT4All | Meet xTuring | Meet ColossalChat: An Open-Source Solution for Cloning ChatGPT with a Complete RLHF Pipeline

This newsletter brings you AI research news that is more technical than most resources, yet still digestible and applicable.

Meet SwissBERT: Researchers from the University of Zurich propose a multilingual language model for Switzerland. SwissBERT is a masked language model created specifically for processing Switzerland-related text, pre-trained and then adapted to news articles written in the national languages of Switzerland -- German, French, Italian, and Romansh. The research team evaluated SwissBERT on natural language understanding tasks related to Switzerland and found that it tends to outperform previous models, especially when processing contemporary news and/or Romansh Grischun. Since SwissBERT uses language adapters, it may be extended to Swiss German dialects in future work. The model and the open-source code are publicly released.
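Masked language modeling, SwissBERT's pretraining objective, hides a fraction of the input tokens and trains the model to reconstruct them. A minimal sketch of the masking step, using a toy whitespace tokenizer for illustration (the real model uses subword tokenization):

```python
import random

def mask_tokens(tokens, mask_token="[MASK]", mask_prob=0.15, seed=1):
    """Replace a random subset of tokens with [MASK]. Returns the
    masked sequence plus (position, original token) labels that the
    model must predict. Toy illustration of the MLM objective only."""
    rng = random.Random(seed)
    masked, labels = [], []
    for i, tok in enumerate(tokens):
        if rng.random() < mask_prob:
            masked.append(mask_token)
            labels.append((i, tok))
        else:
            masked.append(tok)
    return masked, labels

# The model only receives `masked`; `labels` is the training target.
masked, labels = mask_tokens("SwissBERT is adapted to Swiss news articles".split())
```

During pretraining, the loss is computed only at the masked positions, which is what lets the encoder learn bidirectional context.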

Meet OpenFlamingo: A framework for training and evaluating large multimodal models (LMMs) capable of processing images and text. OpenFlamingo is an open-source framework that aims to democratize access to state-of-the-art LMMs by providing a system capable of handling various vision-language tasks. Developed as a reproduction of DeepMind's Flamingo model, OpenFlamingo offers a Python framework for training Flamingo-style LMMs, a large-scale multimodal dataset, an in-context learning evaluation benchmark, and the first version of the OpenFlamingo-9B model, based on LLaMA.

Efficient Fine-Tuning of Language Models with Zero-Init Attention, Meet LLaMA-Adapter: Researchers from China and UCLA release LLaMA-Adapter. With only 1.2M learnable parameters and 52K instruction data, LLaMA-Adapter turns LLaMA into an instruction-following model within one hour, delivering high-quality responses. The team adopted learnable adaption prompts and prepended them to the input text tokens at the higher transformer layers. A zero-init attention mechanism with zero gating adaptively injects the new instructional cues into LLaMA while effectively preserving its pre-trained knowledge. LLaMA-Adapter can also be extended to multi-modal input, e.g., images, yielding an image-conditioned LLaMA that achieves superior reasoning capacity on ScienceQA, a recent multi-modal science question benchmark.
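The zero-gating idea can be shown numerically: a learnable scalar gate, initialized to zero, scales the adapter's attention contribution, so at the start of training the output is exactly the frozen pretrained output and instruction cues are injected only as the gate grows. A simplified numeric sketch, not the paper's full implementation:

```python
def gated_output(pretrained_out, adapter_out, gate):
    """Combine the frozen pretrained attention output with the
    adapter's contribution, scaled by a learnable gate that is
    initialized to 0.0 (the 'zero-init' in zero-init attention)."""
    return [p + gate * a for p, a in zip(pretrained_out, adapter_out)]

pretrained = [0.5, -1.2, 0.3]  # illustrative activations
adapter = [0.9, 0.1, -0.4]

# At initialization (gate = 0) the adapter is invisible, so the
# pre-trained behavior is perfectly preserved:
assert gated_output(pretrained, adapter, 0.0) == pretrained

# As training increases the gate, instructional cues blend in gradually:
step = gated_output(pretrained, adapter, 0.5)
```

This is why fine-tuning stays stable: the model never starts from a randomly perturbed state, only drifts from the pretrained one as the gate learns.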

Databricks Open-Sources Dolly: A ChatGPT-like Generative AI Model that is Easier and Faster to Train. Dolly is a low-cost large language model (LLM) that demonstrates surprisingly high levels of the instruction-following abilities seen in ChatGPT. This work indicates that anyone with access to high-quality training data and an out-of-date open-source LLM can train it to perform like ChatGPT in under 30 minutes on a single machine. Dolly uses data from Alpaca to make minor adjustments to an existing, open-source 6-billion-parameter model from EleutherAI, eliciting instruction-following capabilities such as brainstorming and text generation.

Goldman Sachs says generative AI could replace 300 million jobs: According to research conducted by Goldman Sachs, the increased use of generative AI applications may lead to the automation of the equivalent of 300 million full-time jobs. However, this shift toward automation is also expected to create new job openings through technological innovation. The report suggests that AI automation will affect two-thirds of all jobs to varying degrees, and despite the disruption, it predicts that AI will ultimately contribute to a 7% increase in global GDP.

GPT4All: Researchers release GPT4All, a 7B-parameter language model fine-tuned on a curated set of ~400k GPT-3.5-Turbo assistant-style generations. Inspired by learnings from Alpaca, the research team carefully curated ~800k prompt-response samples into 430k high-quality assistant-style prompt/generation training pairs, including code, dialogue, and stories. The team releases the 800k data samples for anyone to build upon, along with a model you can run on your laptop.

Meet xTuring: An Open-Source Tool That Allows You to Create Your Own Large Language Models (LLMs) With Only Three Lines of Code. xTuring's versatility as a single-GPU or multi-GPU training framework means that users can tailor their models to their specific hardware configurations. xTuring uses memory-efficient fine-tuning techniques like LoRA to speed up the learning process and cut hardware costs by as much as 90%. By decreasing the amount of memory needed for fine-tuning, LoRA enables faster and more effective model training.
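The memory savings from LoRA-style fine-tuning come from freezing the original weight matrix and training only a low-rank update, a product B·A of shapes (d_out × r) and (r × d_in) instead of a full (d_out × d_in) matrix. A back-of-the-envelope sketch with illustrative dimensions (not xTuring's actual configuration):

```python
def lora_param_counts(d_out, d_in, rank):
    """Trainable-parameter counts for a full weight update versus a
    rank-r LoRA update B (d_out x r) @ A (r x d_in)."""
    full = d_out * d_in           # every entry of the update matrix
    lora = rank * (d_out + d_in)  # entries of B plus entries of A
    return full, lora

# A 4096x4096 attention projection with a rank-8 adapter:
full, lora = lora_param_counts(4096, 4096, 8)
reduction = 1 - lora / full  # fraction of parameters no longer trained
```

With these (hypothetical) dimensions the adapter trains well under 1% of the matrix's parameters, which is where the order-of-magnitude memory reduction comes from.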

An open-source solution for cloning ChatGPT with a complete RLHF pipeline, Meet ColossalChat: Although LLMs such as ChatGPT can be accessed as a service, there is a need for a practical open-source alternative that includes a complete RLHF pipeline. Colossal-AI has developed ColossalChat, a new open-source solution based on the LLaMA model that closely follows the original ChatGPT technical approach. With fewer than 10B parameters, ColossalChat achieves bilingual proficiency in English and Chinese through RLHF fine-tuning and delivers results comparable to ChatGPT and GPT-3.5.
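The middle stage of such an RLHF pipeline trains a reward model on human preference pairs, typically with a pairwise ranking loss: for each labeled pair, the loss is -log σ(r_chosen − r_rejected), so it falls as the model learns to score the preferred response higher. A minimal standalone numeric sketch (not ColossalChat's code):

```python
import math

def pairwise_reward_loss(r_chosen, r_rejected):
    """Bradley-Terry style ranking loss used to train RLHF reward
    models: -log(sigmoid(r_chosen - r_rejected)). Low when the chosen
    response clearly out-scores the rejected one."""
    margin = r_chosen - r_rejected
    return -math.log(1.0 / (1.0 + math.exp(-margin)))

# An undecided pair costs log(2); a correctly ranked pair costs less:
undecided = pairwise_reward_loss(0.0, 0.0)
ranked = pairwise_reward_loss(2.0, -1.0)
assert ranked < undecided
```

The resulting reward model then supplies the training signal for the final policy-optimization stage (PPO) of the pipeline.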

Can Artificial Intelligence Match Human Creativity? A New Study Compares The Generation Of Original Ideas Between Humans and Generative Artificial Intelligence Chatbots: In a recent research paper, researchers compared ideas produced by human beings with those generated by generative Artificial Intelligence. The six generative AI chatbots used for the comparison are alpa.ai, Copy.ai, ChatGPT (versions 3 and 4), Studio.ai, and YouChat. To determine the similarities and differences between the creativity of AI-generated and human-generated ideas, both the quality and quantity of ideas were independently evaluated. The ideas were assessed by both humans and an AI explicitly trained for this purpose.

Do You Know Marktechpost has a community of 1.5 Million+ AI Professionals and Engineers? For partnership, please feel free to contact us through this form.