Anthropic | AI Safety & Research Company

Artificial Intelligence | Machine Learning

Enterprise, Human Feedback, Reinforcement Learning, Code Generation, Interpretability

San Francisco, California, United States

Anthropic is an AI safety and research company working to build reliable, interpretable, and steerable AI systems. Today's large, general systems can have significant benefits, but can also be unpredictable, unreliable, and opaque; our goal is to make progress on these issues. For now, we're primarily focused on research towards these goals; down the road, we foresee many opportunities for our work to create value commercially and for public benefit. We are a small, highly collaborative group of researchers, engineers, policy experts, and operational leaders, with experience spanning a variety of disciplines.

Our research interests span multiple areas including natural language, human feedback, scaling laws, reinforcement learning, code generation, and interpretability. The easiest way to understand our research directions is to read some of our papers. Our first AI alignment paper focused on simple baselines and investigations. Our second AI alignment paper explores how to train a general language assistant to be helpful without providing harmful advice or exhibiting bad behaviors. Our first interpretability paper explores a mathematical framework for trying to reverse engineer transformer language models. Our first societal impacts paper explores the technical traits of large generative models and the motivations and challenges people face in building and deploying them. Our second interpretability paper explores the hypothesis that induction heads (discovered in our first interpretability paper) are the mechanism driving in-context learning.

 
 

   Total Funding: $704M

   Funding Stage: Series B

   Business Stage: Scaling Up

   Market: B2B

   Company Size: 26 to 50

   Founded: 2021

 
 

Date: 05/23/2023
Round: Funded
$ Raised: $450M
Investors: Spark Capital, Google, Salesforce, Sound Ventures, Zoom

Date: 03/09/2023
Round: Funded
$ Raised: $300M
Investors: Spark Capital

Date: 05/02/2022
Round: Series B
$ Raised: $580M
Investors: Center for Emerging Risk Research (CERR)

 


 
 
 
Dario Amodei
CEO and Co-Founder

Jared Kaplan
Co-Founder

Tom Brown
Co-Founder

Jack Clark
Co-Founder

Daniela Amodei
President

 
 

Anthropic is growing and hiring. Want to work at Anthropic? Join the team.

 


Executive Personal Assistant to the CEO

San Francisco, California

Executive Personal Assistant

San Francisco, California

Deployment Lead

San Francisco, California

Data Engineer

San Francisco, California

Business Operations

San Francisco, California

 
 