🚨 AI News: LLaMA; ChatLLaMA (the First Open-Source Implementation of LLaMA by Nebuly AI); Stanford Human Preferences (SHP) Dataset...
This newsletter brings AI research news that is much more technical than most resources but still digestible and applicable.
Hi there! Today we will share research updates ranging from the introduction of LLaMA to ChatLLaMA (the first open-source implementation of LLaMA by Nebuly AI), the Stanford Human Preferences (SHP) dataset, LLMs as engines of text evolution, improving LLMs with fact-feedback, and many other cool updates. So, let's start...
LLaMA: A new open-source, high-performance large language model from Meta AI (FAIR). LLaMA is a collection of foundation language models ranging from 7B to 65B parameters. The models were trained on trillions of tokens, showing that it is possible to train state-of-the-art models exclusively on publicly available datasets, without resorting to proprietary and inaccessible data. LLaMA-13B outperforms GPT-3 (175B) on most benchmarks, and LLaMA-65B is competitive with the best models, Chinchilla-70B and PaLM-540B.
Stanford AI Releases the Stanford Human Preferences (SHP) Dataset: Stanford researchers have released Stanford Human Preferences (SHP), a dataset of 385,000 collective human preferences over responses to questions and instructions in 18 distinct categories, ranging from cuisine to legal advice, collected from Reddit. Each SHP preference reflects the helpfulness of one response over another given a certain context and two alternative responses. Each example consists of a question/instruction posted on Reddit and two top-level comments, one of which is collectively preferred over the other. SHP exploits the fact that if comment A was written after comment B but still has a higher score, then A is ostensibly more preferred than B; if A had been written before B, its higher score could simply be the result of greater visibility, so no preference could be inferred.
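For readers who want to poke at the data, the sketch below shows one way to load SHP with the Hugging Face `datasets` library and turn an example into a (context, preferred, dispreferred) triple. The dataset identifier `stanfordnlp/SHP` and the column names (`history`, `human_ref_A`, `human_ref_B`, `labels`) follow the public dataset card, but treat them as assumptions and check the card before relying on them.

```python
# Hedged sketch: load SHP and extract preference pairs.
# Dataset id and column names are taken from the public dataset card and may change.
from datasets import load_dataset

shp = load_dataset("stanfordnlp/SHP", split="train")

def to_preference_pair(example):
    # labels == 1 means human_ref_A is the preferred (higher-scored, later-written) comment.
    preferred = example["human_ref_A"] if example["labels"] == 1 else example["human_ref_B"]
    dispreferred = example["human_ref_B"] if example["labels"] == 1 else example["human_ref_A"]
    return {"context": example["history"], "preferred": preferred, "dispreferred": dispreferred}

pair = to_preference_pair(shp[0])
print(pair["context"][:200])
print("Preferred reply:", pair["preferred"][:200])
```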
Can Pre-trained Vision and Language Models Answer Visual Information-Seeking Questions?: To answer this question, a research team from Google and Georgia Tech presents InfoSeek, a visual question answering dataset focused on information-seeking questions that cannot be answered with common-sense knowledge alone. The team performed multi-stage human annotation to collect a natural distribution of high-quality visual information-seeking question-answer pairs, and also constructed a large-scale, automatically collected dataset by combining existing visual entity recognition datasets with Wikidata, providing over one million examples for model fine-tuning and validation. Based on InfoSeek, the researchers analyzed various pre-trained visual QA systems to gain insights into the characteristics of different pre-trained models.
LLMs as Engines of Text Evolution: The research paper proposes a method called "language model crossover" which utilizes the in-context learning ability of large-scale language models to generate variations similar to evolutionary crossover. The method prompts the language model with a few text-based genotypes and parses its corresponding output as their offspring. This method is versatile and can be used to evolve various types of text representations such as binary bit-strings, sentences, equations, text-to-image prompts, and Python code. The research concludes that language model crossover is a promising method for evolving genomes represented as text.
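As a concrete illustration, here is a minimal sketch of language model crossover: a handful of parent genotypes (plain text strings) are written into a numbered-list prompt, the model continues the list, and each continued line is parsed as an offspring. GPT-2 via Hugging Face `transformers` is used here only as a small, runnable stand-in; the paper's models differ, and the numbered-list prompt format is our own assumption.

```python
# Hedged sketch of "language model crossover": prompt with parent genotypes,
# let the model continue the pattern, and parse continuations as offspring.
from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")

def lm_crossover(parents, num_offspring=3):
    # Present parents as a numbered list so the model continues the pattern.
    prompt = "\n".join(f"{i + 1}. {p}" for i, p in enumerate(parents))
    prompt += f"\n{len(parents) + 1}."
    outputs = generator(prompt, max_new_tokens=64,
                        num_return_sequences=num_offspring, do_sample=True)
    offspring = []
    for candidate in outputs:
        continuation = candidate["generated_text"][len(prompt):]
        # Keep only the first generated line as one offspring genotype.
        child = continuation.split("\n")[0].strip()
        if child:
            offspring.append(child)
    return offspring

parents = ["a watercolor painting of a fox", "an oil painting of a fox at night"]
print(lm_crossover(parents))
```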
🚨 Meet ChatLLaMA: The First Open-Source Implementation of LLaMA Based on Reinforcement Learning from Human Feedback (RLHF): Meta has recently released LLaMA, a collection of foundation large language models ranging from 7 to 65 billion parameters. LLaMA is creating a lot of excitement because it is smaller than GPT-3 yet performs better. For example, LLaMA's 13B model outperforms GPT-3 despite being more than 10 times smaller. This new collection of foundation models opens the door to faster inference and ChatGPT-like real-time assistants that are cost-effective and can run on a single GPU. However, LLaMA was not fine-tuned for instruction-following with a Reinforcement Learning from Human Feedback (RLHF) training process. The good news is that today Nebuly has introduced ChatLLaMA, the first open-source implementation of LLaMA based on RLHF.
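To make the RLHF ingredient concrete, the sketch below shows the standard pairwise ranking loss used to train a reward model on human preference pairs, the same loss family ChatLLaMA-style pipelines rely on. This is a generic PyTorch illustration with a toy linear reward head and random embeddings, not Nebuly's actual implementation.

```python
# Hedged sketch of the pairwise reward-model loss used in RLHF pipelines.
# In practice the reward head sits on top of a pretrained LLaMA backbone;
# here a linear layer over fake embeddings stands in for it.
import torch
import torch.nn.functional as F

class RewardModel(torch.nn.Module):
    def __init__(self, hidden_size=768):
        super().__init__()
        self.score = torch.nn.Linear(hidden_size, 1)  # scalar reward per response

    def forward(self, response_embedding):
        return self.score(response_embedding).squeeze(-1)

def preference_loss(reward_model, emb_chosen, emb_rejected):
    # The human-preferred response should receive a higher scalar reward.
    r_chosen = reward_model(emb_chosen)
    r_rejected = reward_model(emb_rejected)
    return -F.logsigmoid(r_chosen - r_rejected).mean()

model = RewardModel()
emb_chosen, emb_rejected = torch.randn(4, 768), torch.randn(4, 768)
print(preference_loss(model, emb_chosen, emb_rejected))
```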
EPFL developed a robot bird ("ornithopter") that flaps its wings and lands on a branch: The ornithopter is an engineering feat. It needs to slow down significantly as it perches while still maintaining flight, and its leg-claw appendage must be carefully calibrated to compensate for the up-and-down oscillations of flight as it homes in on the branch.
Improving LLMs with Fact-Feedback: This paper proposes LLM-Augmenter, a system that augments a black-box LLM with a set of plug-and-play modules. The system makes the LLM generate responses grounded in consolidated external knowledge, e.g., knowledge stored in task-specific databases. It also iteratively revises LLM prompts to improve model responses, using feedback generated by utility functions such as the factuality score of an LLM-generated response. The effectiveness of LLM-Augmenter is empirically validated on two types of mission-critical scenarios: task-oriented dialog and open-domain question answering. LLM-Augmenter significantly reduces ChatGPT's hallucinations without sacrificing the fluency and informativeness of its responses.
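The loop below is a minimal sketch of that retrieve-generate-score-revise cycle. Everything in it (the keyword retriever, the echoing `call_llm` stub, the overlap-based `factuality_score`, and the threshold) is a hypothetical stand-in for the paper's modules, included only to show the control flow.

```python
# Hedged sketch of an LLM-Augmenter-style feedback loop with stubbed modules.

def retrieve_evidence(query, knowledge_base):
    # Hypothetical retriever: keep documents sharing any word with the query.
    return [doc for doc in knowledge_base
            if any(w in doc.lower() for w in query.lower().split())]

def call_llm(prompt):
    # Placeholder for a black-box LLM call (e.g., ChatGPT); just echoes here.
    return f"Draft answer based on: {prompt[:80]}..."

def factuality_score(response, evidence):
    # Hypothetical utility function: fraction of evidence snippets reflected verbatim.
    if not evidence:
        return 0.0
    return sum(e.lower() in response.lower() for e in evidence) / len(evidence)

def augmented_answer(query, knowledge_base, threshold=0.5, max_rounds=3):
    evidence = retrieve_evidence(query, knowledge_base)
    prompt = f"Answer using this evidence: {evidence}\nQuestion: {query}"
    response = ""
    for _ in range(max_rounds):
        response = call_llm(prompt)
        if factuality_score(response, evidence) >= threshold:
            break
        # Feed the utility feedback back into the prompt and try again.
        prompt += "\nFeedback: the previous answer was not grounded in the evidence; cite it explicitly."
    return response

kb = ["The Eiffel Tower is 330 metres tall.", "The Eiffel Tower is located in Paris."]
print(augmented_answer("How tall is the Eiffel Tower?", kb))
```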