Robin Hanson discusses his book "The Age of Em" at 5pm today at the Oxford Martin School.

Oct 19 · Wed 5:00 PM UTC+01 · Oxford, United Kingdom
78 people interested · 28 people going

EAGxOxford is accepting applications. Apply here: http://eagxoxford.com/

Nov 18 - Nov 20 · Examination Schools
228 people interested · 142 people going

Statement to the UN General Assembly First Committee on Disarmament and International Security

http://biosecu.re/…/4_Statement_to_the_UN_First_Committee.h…

In December this year the Biological Weapons Convention (BWC), the international treaty that bans biological weapons, meets for its five-yearly high-level review. This provides a rare opportunity to send a clear message to the international community that there are concerns over the risks posed by these weapons.
biosecu.re

Neat app from the Future of Life Institute that breaks down alternative ways to spend the US nuclear weapons budget

Right now the U.S. government is about to launch a new nuclear weapons program that will…
futureoflife.org

The University of Cambridge Centre for the Study of Existential Risk (CSER) is hiring.

The University of Cambridge Centre for the Study of Existential Risk (CSER) is recruiting for an Academic Project Manager. This is an opportunity to play a shaping role as CSER builds on its first year's momentum towards becoming a permanent world-class research centre. We seek an ambitious candidate with initiative and a broad intellectual range for a postdoctoral role combining academic and project management responsibilities.

The Academic Project Manager will work with CSER's Executive Director and research team to co-ordinate and develop CSER's projects and overall profile, and to develop new research directions. The post-holder will also build and maintain collaborations with academic centres, industry leaders and policy makers in the UK and worldwide, and will act as an ambassador for the Centre’s research externally. Research topics will include AI safety, bio risk, extreme environmental risk, future technological advances, and cross-cutting work on governance, philosophy and foresight. Candidates will have a PhD in a relevant subject, or have equivalent experience in a relevant setting (e.g. policy, industry, think tank, NGO).

Application deadline: November 11th. http://www.jobs.cam.ac.uk/job/11684/

Research Associate (Fixed Term) in the Centre for Research in Arts, Social Sciences and Humanities at the University of Cambridge.
jobs.cam.ac.uk
Cambridge Conference on Catastrophic Risk 2016 – Call for Papers. Managing Emerging Risks – Where Next? 12-14 December 2016. The past five years have seen rapid growt...
cser.org

The Global Catastrophic Risks Institute is seeking a Media Engagement Intern: "The ideal candidate is a student or early-career professional seeking a career at the intersection of global catastrophic risk and the media."

The Global Catastrophic Risk Institute (GCRI) seeks a volunteer/intern to work on media engagement on global catastrophic risk, which is the risk of events that could harm or destroy global human civilization. The work would include two parts: (1) analysis of existing media covera...
gcrinstitute.org

FHI is Hiring!

FHI is accepting applications for a two-year position as a full-time Research Project Manager. Responsibilities will include coordinating, monitoring, and developing FHI’s activities, seeking funding, organizing workshops and conferences, and effectively communicating FHI’s research. The Research Project Manager will also be expected to work in collaboration with Professor Nick Bostrom and other researchers to advance their research agendas, and will additionally be expected to produce reports for government, industry, and other relevant organizations.

Applicants will be familiar with existing research and literature in the field and have excellent communication skills, including the ability to write for publication. He or she will have experience of independently managing a research project and of contributing to large policy-relevant reports. Previous professional experience working for non-profit organisations, experience with effective altruism, and a network in the relevant fields associated with existential risk may be an advantage, but are not essential.

To apply please go to https://www.recruit.ox.ac.uk and enter vacancy #124775 (it is also possible to find the job by choosing “Philosophy Faculty” from the department options). The deadline is noon UK time on 29 August. To stay up to date on job opportunities at the Future of Humanity Institute, please sign up for our vacancies newsletter here.


Starting at 18 min: an update on the landscape of AI safety research. EA Global panel with OpenAI, Vicarious, FHI, and Open Philanthropy.

Riva-Melissa Tez, Dario Amodei, Dileep George, Toby Ord, Daniel Dewey: How can we benefit from modern AI while avoiding the risks? EA GLOBAL
library.fora.tv

Effective Altruism Global at UC Berkeley, the conference for doing good effectively, is coming up. You can meet FHI staff there, join our workshops, and connect with the many participants interested in improving the future of humanity.

Sign up at eaglobal.org, the deadline is approaching. There are scholarships for those who can't afford the cost.

The fourth annual conference of Effective Altruism, a growing intellectual movement that uses reason and evidence to improve the world as much as possible. This year, over 1,000 attendees and over 50 speakers from around the world are expected.
eaglobal.org

Article in Science on catastrophic risks, with researcher Anders Sandberg

Rare cataclysms are hard to study and plan for, but they may be too dangerous to ignore
sciencemag.org

Future of Humanity Institute (Oxford University) Director and Superintelligence author Nick Bostrom talks to the Financial Times Innovations Editor, John Thornhill, about AI and whether we can control it. (Please note that this article is behind the FT's paywall.)

Bostrom believes ... that the problem [of controlling AI] may be difficult, but is not insoluble, provided we start early enough and apply enough mathematical talent. What would be terrible, in his view, would be to find ourselves on the brink of developing HLMI (Human Level Machine Intelligence) and realising that it’s too late to do anything to ensure humans retain control over our creations. “That seems like a stupid place to be, if we can avoid it.”

“Maybe the problem will turn out to be much easier than it seems, and that will be good. It still seems extremely prudent, though, to put in the work, in case it does turn out to be harder,” he says. “Whether the risk is 1 per cent, 80 per cent, or anywhere in between, it still makes sense to do some of the things I think should be done. The position is not sensitive to one’s level of optimism.”

Scientists reckon there have been at least five mass extinction events in the history of our planet, when a catastrophically high number of species were wiped out in a relatively short period of time. We are possibly now living through a sixth...
ft.com

New paper from Google Research in collaboration with OpenAI on concrete problems in AI Safety, which outlines five technical problems related to accident risk in AI systems.

FHI researcher Jan Leike presents the first decision-theoretic foundation for game theory.

Significant new results: Leike, Taylor, and Fallenstein describe a general-purpose formal foundation for game theory.

Future of Humanity Institute Research Fellow Jan Leike and MIRI Research Fellows Jessica Taylor and Benya Fallenstein have just presented new results at UAI 2016 that resolve a longstanding open problem in game theory: “A formal solution to the grain of truth problem.” Game theorists have techniques...
intelligence.org

CSER is hiring a research fellow to work on biotechnology risks.

The Future of Humanity Institute is a multidisciplinary research institute at the University of Oxford.
fhi.ox.ac.uk

FHI research fellow Owain Evans has helped to develop the OpenAI safety environments.

Nice cross-section of perspectives on the near and far future of AI

The best minds in the business—Yann LeCun of Facebook, Luke Nosek of the Founders Fund, Nick Bostrom of Oxford University and Andrew Ng of Baidu—on what life will look like in the age of the machines
wsj.com