Anthropic | AI Safety & Research Company

Artificial Intelligence | Machine Learning

Enterprise, Human Feedback, Reinforcement Learning, Code Generation, Interpretability

San Francisco, California, United States
Anthropic is an AI safety and research company that’s working to build reliable, interpretable, and steerable AI systems. Large, general systems of today can have significant benefits, but can also be unpredictable, unreliable, and opaque: our goal is to make progress on these issues. For now, we’re primarily focused on research towards these goals; down the road, we foresee many opportunities for our work to create value commercially and for public benefit. We are a small, highly collaborative group of researchers, engineers, policy experts, and operational leaders, with experience spanning a variety of disciplines. Our research interests span multiple areas including natural language, human feedback, scaling laws, reinforcement learning, code generation, and interpretability.

The easiest way to understand our research directions is to read some of our papers. Our first AI alignment paper focused on simple baselines and investigations. Our second AI alignment paper explores how to train a general language assistant to be helpful without providing harmful advice or exhibiting bad behaviors. Our first interpretability paper explores a mathematical framework for reverse engineering transformer language models. Our first societal impacts paper explores the technical traits of large generative models and the motivations and challenges people face in building and deploying them. Our second interpretability paper explores the hypothesis that induction heads (discovered in our first interpretability paper) are the mechanism driving in-context learning.

 

B2B

26 to 50

Series B

$704M

Scaling Up

2021

 
 

Research & Consulting Services

Explores the Technical Traits
Language Assistant as a Laboratory for Alignment

Increase Efficiency

 
 

Analytics
Service

Yes

Active

 
 

   Machine Learning

Machine Learning System

Text

Structured

   Software

Python

PyTorch

HTML

Kubernetes

Spark

Machine Learning Algorithm

Deep Learning Algorithm


Executive Personal Assistant to the CEO - San Francisco, California

Executive Personal Assistant - San Francisco, California

Deployment Lead - San Francisco, California

Data Engineer - San Francisco, California

Business Operations - San Francisco, California

 


 

1

1

$580M

The company was founded in 2021, and it took almost a year (May 2022) to raise its first external round.

 
 

Date | Round | $ Raised | Investors
05/02/2022 | Series B | $580M | Center for Emerging Risk Research (CERR)

 


 
Dario Amodei
CEO and Co-Founder

Jared Kaplan
Co-Founder

Tom Brown
Co-Founder

Jack Clark
Co-Founder

Daniela Amodei
President

 
 
