Victoria Krakovna shares her impressions of OpenAI's recent unconference.

Last weekend, I attended OpenAI’s self-organizing conference on machine learning (SOCML 2016), meta-organized by Ian Goodfellow (thanks Ian!). It was held at OpenAI’s new office, with s…
vkrakovna.wordpress.com

Our AMA went great! Our full answers: http://effective-altruism.com/ea/12r/ask_miri_anything_ama/.

Meanwhile — we're 30% of the way to our funding target, with only two weeks remaining! Your support right now can make a bigger-than-usual difference for our AI safety research program.

The Machine Intelligence Research Institute is running its annual fundraiser, and we're using the opportunity to explain why we think MIRI's work is useful from…
effective-altruism.com

We had a great time discussing open problems in AI safety at OpenAI's unconference this past weekend!

Our first group learning experiment! Last week we hosted over a hundred and fifty AI practitioners in our offices for our first self-organizing conference on machine learning. The goal was to accelerate AI research by bringing a diverse group of people together and making it easy for them to educate...
openai.com

Two announcements: we're answering questions on the EA Forum, and we have a new talk out introducing logical induction.

Nate, Malo, Jessica, Tsvi, and I will be answering questions tomorrow at the Effective Altruism Forum. If you’ve been curious about anything related to our research, plans, or general thoughts, you’re invited to submit your own questions in the comments below or at Ask MIRI Anything. We’ve also post...
intelligence.org

Have a burning question about MIRI? We're taking questions on the EA Forum!

Hi, all! The Machine Intelligence Research Institute (MIRI) is answering questions here tomorrow, October 12 at 10am PDT. You can post questions below in the in…
effective-altruism.com

Big news this month: our largest grant to date, our most ambitious fundraiser, and a promising new result in logical uncertainty.

Our big announcement this month is our paper “Logical Induction,” introducing an algorithm that learns to assign reasonable probabilities to mathematical, empirical, and self-referential claims in a way that outpaces deduction. MIRI’s 2016 fundraiser is also live, and runs through the end of October...
intelligence.org

Want to support MIRI's research? Our 2016 fundraiser is in full swing, and we've written up our case for MIRI's research focus, with new details on how our methods differ from other candidate approaches.

The Machine Intelligence Research Institute is running its annual fundraiser, and we're using the opportunity to explain why we think MIRI's work is useful from…
effective-altruism.com

A full archive of videos from MIRI and the Future of Humanity Institute's colloquium series.

We’ve uploaded the final set of videos from our recent Colloquium Series on Robust and Beneficial AI (CSRBAI) at the MIRI office, co-hosted with the Future of Humanity Institute. A full list of CSRBAI talks with public video or slides: Stuart Russell (UC Berkeley) — AI: The Story So Far (slides) A...
intelligence.org

Elon Musk notably sounded the alarm about potentially catastrophic artificial intelligence. As conc...
insidephilanthropy.com

Presented at the 2016 Colloquium Series on Robust and Beneficial AI (CSRBAI) hosted by the Machine Intelligence Research Institute (MIRI) and Oxford's Future...
youtube.com

The Centre for the Study of Existential Risk is seeking papers in three areas (AI, climate/environmental risks, and bioengineering) for its very first conference!

Cambridge Conference on Catastrophic Risk 2016: Managing Emerging Risks – Where Next? 12-14 December 2016 – Call For Papers. The past five years have seen rapid growt...
cser.org

The Partnership on Artificial Intelligence to Benefit People and Society (or simply Partnership on AI) will carry out research and recommend best practices.
businessinsider.com

Our revamped research page:

We focus our research on AI approaches that can be made transparent, so that humans can understand why the AIs behave as they do.
intelligence.org

From CSRBAI: a talk on AIXI's strengths and weaknesses, and using reflective oracles to define correct behavior in multi-agent settings.

Presented at the 2016 Colloquium Series on Robust and Beneficial AI (CSRBAI) hosted by the Machine Intelligence Research Institute (MIRI) and Oxford's Future...
youtube.com

"In general, what criteria might we use to judge an assignment of probabilities to mathematical statements as reasonable or unreasonable?" A discussion of MIRI's new theoretical model of inductive reasoning.

golem.ph.utexas.edu

An introduction to our new theoretical result, "logical induction": a highly general method for assigning reasonable probabilities to conjectures in mathematics and computer science.

Andrew Critch, a research fellow at the Machine Intelligence Research Institute, describes a new model of deductively limited reasoning developed by Scott Ga...
youtube.com

Our 2016 fundraiser is live! Learn more about our plans and about some key new developments in the field.

Our 2016 fundraiser is underway! Unlike in past years, we’ll only be running one fundraiser in 2016, from Sep. 16 to Oct. 31. Employer matching and pledges to give later this year also count towards the total. MIRI is…
intelligence.org

Announcing a new framework for probabilistic reasoning under deductive limitations. We propose “a financial solution to the computer science problem of metamathematics”: an algorithm that assigns reasonable probabilities to mathematical conjectures in a way that outpaces deduction, explained by analogy to inexploitable stock markets.

MIRI is releasing a paper introducing a new model of deductively limited reasoning: “Logical induction,” authored by Scott Garrabrant, Tsvi Benson-Tilsen, Andrew Critch, myself, and Jessica Taylor. Readers may wish to start with the abridged version. Consider a setting where a reasoner is observing…
intelligence.org