Friday, July 14, 2017

Three stories about Robot Stories

Here are the slides I gave yesterday morning as a member of the panel Sci-Fi Dreams: How visions of the future are shaping the development of intelligent technology, at the Centre for the Future of Intelligence 2017 conference. I presented three short stories about robot stories.




Slide 2:
The FP7 TRUCE Project invited a number of scientists - mostly within the field of Artificial Life - to suggest ideas for short stories. Those ideas were then sent to a panel of writers, each of whom chose one to develop. I submitted an idea called The feeling of what it is like to be a robot and was delighted when Lucy Caldwell contacted me. Following a visit to the lab, Lucy drafted a beautiful story called The Familiar which - following some iteration - appeared in the collected volume Beta Life.

Slide 3:
More recently the EU Human Brain Project Foresight Lab brought three Sci-Fi writers - Allen Ashley, Jule Owen and Stephen Oram - to visit the lab. Inspired by what they saw, they then wrote three wonderful short stories, which were read at the 2016 Bristol Literature Festival. The readings were followed by a panel discussion which included me and BRL colleagues Antonia Tzemanaki and Marta Palau Franco. The three stories are published in the volume Versions of the Future. Stephen Oram went on to publish a collection called Eating Robots.

Slide 4:
My first two stories were about people telling stories about robots. Now I turn to the possibility of robots themselves telling stories. Some years ago I speculated on the idea of robots telling each other stories (directly inspired by a conversation with Richard Gregory). That idea has now turned into a current project, with the aim of building an embodied computational model of storytelling. For a full description see this paper, currently in press.

Wednesday, June 21, 2017

CogX: Emerging ethical principles, toolkits and standards for AI

Here are the slides I presented at the CogX session on Precision Ethics this afternoon. My intention with these slides was to give a 10 minute helicopter overview of emerging ethical principles, toolkits and ethical standards for AI, including Responsible Research and Innovation.

A commentary will follow in a few days.

Wednesday, March 08, 2017

Does AI pose a threat to society?

Last week I had the pleasure of debating the question "does AI pose a threat to society?" with friends and colleagues Christian List, Maja Pantic and Samantha Payne. The event was organised by the British Academy and brilliantly chaired by the Royal Society's director of science policy Claire Craig.

Here is my opening statement:

One Friday afternoon in 2009 I was called by a science journalist at, I recall, the Sunday Times. He asked me if I knew that there was to be a meeting of the AAAI to discuss robot ethics. I said no, I didn't know of this meeting. He then asked "are you surprised they are meeting to discuss robot ethics" and my answer was no. We talked some more and agreed it was actually a rather dull story: a case of scientists behaving responsibly. I really didn't expect the story to appear but checked the Sunday paper anyway, and there in the science section was the headline Scientists fear revolt of killer robots. (I then spent the next couple of days on the radio explaining that no, scientists do not fear a revolt of killer robots.)

So, fears of future super intelligence - robots taking over the world - are greatly exaggerated: the threat of an out-of-control super intelligence is a fantasy - interesting for a pub conversation perhaps. It's true we should be careful and innovate responsibly, but that's equally true for any new area of science and technology. The benefits of robotics and AI are so significant, the potential so great, that we should be optimistic rather than fearful. Of course robots and intelligent systems must be engineered to very high standards of safety for exactly the same reasons that we need our washing machines, cars and airplanes to be safe. If robots are not safe people will not trust them. To reach its full potential, what robotics and AI needs is a dose of good old fashioned (and rather dull) safety engineering.

In 2011 I was invited to join a British Standards Institute working group on robot ethics, which drafted a new standard BS 8611 Guide to the ethical design of robots and robotic systems, published in April 2016. I believe this to be the world’s first standard on ethical robots.

Also in 2016 the very well regarded IEEE standards association – the same organization that gave us WiFi - launched a Global initiative on Ethical Considerations in AI and Autonomous Systems. The purpose of this Initiative is to ensure every technologist is educated and empowered to prioritize ethical considerations in the design and development of autonomous and intelligent systems; in a nutshell, to ensure ethics are baked in. In December we published Ethically Aligned Design: A Vision for Prioritizing Human Well Being with AI and Autonomous Systems. Within that initiative I'm also leading a new standard on transparency in autonomous systems, based on the simple principle that it should always be possible to find out why an AI or robot made a particular decision.

We need to agree ethical principles, because they are needed to underpin standards – ways of assessing and mitigating the ethical risks of robotics and AI. But standards need teeth, and in turn they underpin regulation. Why do we need regulation? Think of passenger airplanes; the reason we trust them is that it's a highly regulated industry with an amazing safety record, and robust, transparent processes of air accident investigation when things do go wrong. Take one example of a robot that we read a lot about in the news – the Driverless Car. I think there's a strong case for a driverless car equivalent of the CAA, with a driverless car accident investigation branch. Without this it's hard to see how driverless car technology will win public trust.

Does AI pose a threat to society? No. But we do need to worry about the down-to-earth questions of present-day, rather unintelligent AIs; the ones that are deciding our loan applications, piloting our driverless cars or controlling our central heating. Are those AIs respecting our rights, freedoms and privacy? Are they safe? When AIs make bad decisions, can we find out why? And I worry too about the wider societal and economic impacts of AI. I worry about jobs of course, but actually I think there is a bigger question: how can we ensure that the wealth created by robotics and AI is shared by all in society?

Thank you.

This image was used to advertise the BA's series of events on the theme Robotics, AI and Society. The reason I reproduce it here is that one of the many interesting questions to the panel was about the way that AI tends to be visualised in the media. This kind of human face coalescing (or perhaps emerging) from the atomic parts of the AI seems to have become a trope for AI. Is it a helpful visualisation of the human face of AI, or does it mislead by giving the impression that AI has human characteristics?

Wednesday, February 15, 2017

Thoughts on the EU's draft report on robotics

A few weeks ago I was asked to write a short op-ed on the European Parliament Law Committee's recommendations on civil law rules for robotics.

In the end the piece didn't get published, so I am posting it here.

It is a great shame that most reports of the European Parliament’s Committee for Legal Affairs’ vote last week on its Draft Report on Civil Law Rules on Robotics headlined on ‘personhood’ for robots, because the report has much else to commend it. Most important among its several recommendations is a proposed code of ethical conduct for roboticists, which explicitly asks designers to research and innovate responsibly. Some may wonder why such an invitation even needs to be made but, given that engineering and computer science education rarely includes classes on ethics (it should), it is really important that robotics engineers reflect on their ethical responsibilities to society – especially given how disruptive robot technologies are. This is not new – great frameworks for responsible research and innovation already exist. One such is the 2014 Rome Declaration on RRI, and in 2015 the Foundation for Responsible Robotics was launched.

Within the report’s draft Code of Conduct is a call for robotics funding proposals to include a risk assessment. This too is a very good idea and guidance already exists in British Standard BS 8611, published in April 2016. BS 8611 sets out a comprehensive set of ethical risks and offers guidance on how to mitigate them. It is very good also to see that the Code stresses that humans, not robots, are the responsible agents; this is something we regarded as fundamental when we drafted the Principles of Robotics in 2010.

For me transparency (or the lack of it) is an increasing worry in both robots and AI systems. Labour's industry spokesperson Chi Onwurah is right to say, "Algorithms are part of our world, so they are subject to regulation, but because they are not transparent, it's difficult to regulate them effectively" (and don't forget that it is algorithms that make intelligent robots intelligent). So it is very good to see the draft Code call for robotics engineers to "guarantee transparency … and right of access to information by all stakeholders", and then in the draft 'Licence for Designers': you should ensure "maximal transparency" and, even more welcome, "you should develop tracing tools that … facilitate accounting and explanation of robotic behaviour… for experts, operators and users". Within the IEEE Standards Association Global Initiative on Ethics in AI and Autonomous Systems, launched in 2016, we are working on a new standard on Transparency in Autonomous Systems.

This brings me to standards and regulation. I am absolutely convinced that regulation, together with transparency and public engagement, builds public trust. Why is it that we trust our tech? Not just because it's cool and convenient, but also because it's safe (and we assume that the disgracefully maligned experts will take care of assuring that safety). One of the reasons we trust airliners is that we know they are part of a highly regulated industry with an amazing safety record. The reason commercial aircraft are so safe is not just good design, it is also the tough safety certification processes and, when things do go wrong, robust processes of air accident investigation. So the Report's call for a European Agency for Robotics and AI to recommend standards and a regulatory framework is, as far as I'm concerned, not a moment too soon. We urgently need standards for safety certification of a wide range of robots, from drones and driverless cars to robots for care and assisted living.

Like many of my robotics colleagues I am deeply worried by the potential for robotics and AI to increase levels of economic inequality in the world. Winnie Byanyima, executive director of Oxfam, writes for the WEF, "We need fundamental change to our economic model. Governments must stop hiding behind ideas of market forces and technological change. They … need to steer the direction of technological development". I think she is right – we need a serious public conversation about technological unemployment and how we ensure that the wealth created by AI and Autonomous Systems is shared by all. A Universal Basic Income may or may not be the best way to do this – but it is very encouraging to see this question raised in the draft Report.

I cannot close the piece without at least mentioning artificial personhood. My own view is that personhood is the solution to a problem that doesn't exist. I can understand why, in the context of liability, the Report raises this question for discussion, but – as the report itself later asserts in the Code of Conduct – humans, not robots, are the responsible agents. Robots are, and should remain, artefacts.

Friday, January 06, 2017

The infrastructure of life 2 - Transparency

Part 2: Autonomous Systems and Transparency

In my previous post I argued that a wide range of AI and Autonomous Systems (from now on I will just use the term AS as shorthand for both) should be regarded as Safety Critical. I include both autonomous software AI systems and hard (embodied) AIs such as robots, drones and driverless cars. Many will be surprised that I include in the soft AI category apparently harmless systems such as search engines. Of course no-one is seriously inconvenienced when Amazon makes a silly book recommendation, but consider very large groups of people. If a truth such as global warming is - because of accidental or willful manipulation - presented as false, and that falsehood believed by a very large number of people, then serious harm to the planet (and we humans who depend on it) could surely result.

I argued that the tools barely exist to properly assure the safety of AS, let alone the standards and regulation needed to build public trust, and that political pressure is needed to ensure our policymakers fully understand the public safety risks of unregulated AS.

In this post I will outline the case that transparency is a foundational requirement for building public trust in AS based on the radical proposition that it should always be possible to find out why an AS made a particular decision.

Transparency is not one thing. Clearly your elderly relative doesn't require the same level of understanding of her care robot as the engineer who repairs it. Nor would you expect the same appreciation of the reasons a medical diagnosis AI recommends a particular course of treatment as your doctor. Broadly (and please understand this is a work in progress) I believe there are five distinct groups of stakeholders, and that AS must be transparent to each, in different ways and for different reasons. These stakeholders are: (1) users, (2) safety certification agencies, (3) accident investigators, (4) lawyers or expert witnesses and (5) wider society.
  1. For users, transparency is important because it builds trust in the system, by providing a simple way for the user to understand what the system is doing and why.
  2. For safety certification of an AS, transparency is important because it exposes the system's processes for independent certification against safety standards.
  3. If accidents occur, AS will need to be transparent to an accident investigator; the internal processes that led to the accident need to be traceable.
  4. Following an accident, lawyers or other expert witnesses, who may be required to give evidence, require transparency to inform their evidence. And
  5. for disruptive technologies, such as driverless cars, a certain level of transparency to wider society is needed in order to build public confidence in the technology.
Of course the way in which transparency is provided is likely to be very different for each group. If we take a care robot as an example, transparency means the user can understand what the robot might do in different circumstances; if the robot should do anything unexpected she should be able to ask the robot 'why did you just do that?' and receive an intelligible reply. Safety certification agencies will need access to technical details of how the AS works, together with verified test results. Accident investigators will need access to data logs of exactly what happened prior to and during an accident, most likely provided by something akin to an aircraft flight data recorder (and it should be illegal to operate an AS without such a system). And wider society would need accessible documentary-type science communication to explain the AS and how it works.
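
To make the flight data recorder idea a little more concrete, here is a minimal sketch (in Python) of what an append-only decision log, with a simple 'why did you just do that?' query, might look like. I should stress that the class names, record fields and file name below are purely illustrative assumptions of mine; they are not drawn from P7001 or from any existing system.

```python
# A minimal, purely illustrative sketch of a flight-data-recorder style
# decision log for an autonomous system. The record fields and class names
# are assumptions for illustration only.

import json
import time
from dataclasses import dataclass, asdict


@dataclass
class DecisionRecord:
    timestamp: float          # when the decision was made
    sensor_summary: dict      # salient sensor inputs at that moment
    options_considered: list  # candidate actions the system evaluated
    chosen_action: str        # the action actually taken
    reason: str               # human-readable rationale for the choice


class DecisionLogger:
    """Append-only log intended to survive an accident, akin to a
    flight data recorder for an autonomous system."""

    def __init__(self, path: str):
        self.path = path

    def record(self, rec: DecisionRecord) -> None:
        # Append one JSON line per decision so the log is easy to replay
        with open(self.path, "a") as f:
            f.write(json.dumps(asdict(rec)) + "\n")

    def explain_last(self) -> str:
        # A very simple 'why did you just do that?' query for the user
        with open(self.path) as f:
            last = json.loads(f.readlines()[-1])
        return f"I chose '{last['chosen_action']}' because {last['reason']}."


if __name__ == "__main__":
    log = DecisionLogger("care_robot_log.jsonl")  # hypothetical log file
    log.record(DecisionRecord(
        timestamp=time.time(),
        sensor_summary={"user_location": "kitchen", "time_of_day": "22:30"},
        options_considered=["remind user to take medication", "do nothing"],
        chosen_action="remind user to take medication",
        reason="the evening medication window closes at 23:00",
    ))
    print(log.explain_last())
```

The design point of the sketch is simply that the same append-only, human-readable record could serve a user asking a question today and an accident investigator replaying events later.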

In IEEE Standards Association project P7001, we aim to develop a standard that sets out measurable, testable levels of transparency in each of these categories (and perhaps new categories yet to be determined), so that Autonomous Systems can be objectively assessed and levels of compliance determined. It is our aim that P7001 will also articulate a range of levels of transparency, from a minimum acceptable level up to the highest achievable. The standard will provide designers of AS with a toolkit for self-assessing transparency, and recommendations for how to address shortcomings or transparency hazards.
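
By way of illustration only, here is a toy sketch of the kind of self-assessment such a standard might enable. The five stakeholder groups are the ones listed above; the numeric levels and the minimum threshold are invented placeholders for the purposes of the sketch, not the levels that P7001 will actually define.

```python
# Purely illustrative sketch of a transparency self-assessment. The
# stakeholder groups come from the post; the numeric scale (0 = no
# transparency provision, 4 = highest achievable) is a placeholder and
# NOT the scale defined by IEEE P7001.

STAKEHOLDERS = ["users", "safety certifiers", "accident investigators",
                "expert witnesses", "wider society"]


def assess(system_scores: dict, minimum_level: int = 1) -> dict:
    """Return a per-stakeholder pass/fail report against a minimum level."""
    report = {}
    for group in STAKEHOLDERS:
        score = system_scores.get(group, 0)
        report[group] = {"level": score, "meets_minimum": score >= minimum_level}
    return report


if __name__ == "__main__":
    # Example: a hypothetical care robot's self-assessment
    scores = {"users": 3, "safety certifiers": 2, "accident investigators": 1,
              "expert witnesses": 1, "wider society": 0}
    for group, result in assess(scores).items():
        status = "OK" if result["meets_minimum"] else "shortfall"
        print(f"{group:22s} level {result['level']}  {status}")
```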

Of course transparency on its own is not enough. Public trust in technology, as in government, requires both transparency and accountability. Transparency is needed so that we can understand who is responsible for the way Autonomous Systems work and - equally importantly - don't work.


Thanks: I'm very grateful to colleagues in the IEEE global initiative on ethical considerations in Autonomous Systems for supporting P7001, especially John Havens and Kay Firth-Butterfield. I'm equally grateful to colleagues at the Dagstuhl on Engineering Moral Machines, especially Michael Fisher, Marija Slavkovik and Christian List for discussions on transparency.

Related blog posts:
The Infrastructure of Life 1 - Safety
Ethically Aligned Design
How do we trust our Robots?
It's only a matter of time

Sunday, January 01, 2017

The infrastructure of life 1 - Safety

Part 1: Autonomous Systems and Safety

We all rely on machines. All aspects of modern life, from transport to energy, work to welfare, play to politics depend on a complex infrastructure of physical and virtual systems. How many of us understand how all of this stuff works? Very few I suspect. But it doesn't matter, does it? We trust the good men and women (the disgracefully maligned experts) who build, manage and maintain the infrastructure of life. If something goes wrong they will know why. And (we hope) make sure it doesn't happen again.

All well and good you might think. But the infrastructure of life is increasingly autonomous - many decisions are now made not by a human but by the systems themselves. When you search for a restaurant near you the recommendation isn't made by a human, but by an algorithm. Many financial decisions are not made by people but by algorithms; and I don't just mean city investments - it's possible that your loan application will be decided by an AI. Machine legal advice is already available; a trend that is likely to increase. And of course if you take a ride in a driverless car, it is algorithms that decide when the car turns, brakes and so on. I could go on.

These are not trivial decisions. They affect lives. The real world impacts are human and economic, even political (search engine results may well influence how someone votes). In engineering terms these systems are safety critical. Examples of safety critical systems that we all rely on from time to time include aircraft autopilots or train braking systems. But - and this may surprise you - the difficult engineering techniques used to prove the safety of such systems are not applied to search engines, automated trading systems, medical diagnosis AIs, assistive living robots, delivery drones, or (I'll wager) driverless car autopilots.

Why is this? Well, it's partly because the field of AI and autonomous systems is moving so fast. But I suspect it has much more to do with an incompatibility between the way we have traditionally designed safety critical systems, and the design of modern AI systems. There is I believe one key problem: learning. There is a very good reason that current safety critical systems (like aircraft autopilots) don't learn. Current safety assurance approaches assume that the system being certified will never change, but a system that learns does – by definition – change its behaviour, so any certification is rendered invalid after the system has learned.
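
A toy example may help to show why learning and conventional certification sit so uneasily together. In the sketch below the 'certificate' is simply a fingerprint of the parameters that were assessed; the parameter names and values are made up for illustration, and real safety cases are of course far richer than a hash check.

```python
# Toy sketch: a certificate is issued against a fixed snapshot of the
# system's behaviour (here, a hash of its parameters), so any online
# learning invalidates it. Entirely illustrative.

import hashlib
import json


def behaviour_fingerprint(parameters: dict) -> str:
    """Hash the parameters that determine the system's behaviour."""
    blob = json.dumps(parameters, sort_keys=True).encode()
    return hashlib.sha256(blob).hexdigest()


# Certification time: the assessor signs off on this exact configuration
certified_params = {"brake_threshold": 0.7, "steering_gain": 1.2}  # invented
certificate = behaviour_fingerprint(certified_params)

# Deployment: the system learns and adjusts its own parameters
deployed_params = dict(certified_params)
deployed_params["brake_threshold"] = 0.65   # adapted from experience

if behaviour_fingerprint(deployed_params) != certificate:
    print("Behaviour no longer matches the certified configuration;")
    print("the original certification no longer applies.")
```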

And as if that were not bad enough, the particular method of learning which has caused such excitement - and rapid progress - in the last few years is based on Artificial Neural Networks (more often these days referred to as Deep Learning). A characteristic of ANNs is that, after the ANN has been trained with datasets, it is effectively impossible to examine its internal structure in order to understand why and how it makes a particular decision. The decision making process of an ANN is opaque. AlphaGo's moves were beautiful but puzzling. We call this the black box problem.
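
To see why, consider even a tiny hand-built network. The weights below are chosen by hand to compute XOR, purely as an illustration; everything the network 'knows' is encoded in a handful of numbers. Scale this up to the millions of weights a deep network learns from data, and inspecting those numbers tells you essentially nothing about why a particular decision was made.

```python
# A tiny 2-2-1 network whose weights compute XOR. The weights are
# hand-picked for illustration, not trained; the point is that the
# network's 'knowledge' is just these numbers.

import numpy as np

W1 = np.array([[20.0, -20.0],
               [20.0, -20.0]])   # hidden units: OR and NAND
b1 = np.array([-10.0, 30.0])
W2 = np.array([[20.0],
               [20.0]])          # output unit: AND of the hidden units
b2 = np.array([-30.0])


def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))


def predict(x):
    h = sigmoid(x @ W1 + b1)
    return sigmoid(h @ W2 + b2)


for x in [(0, 0), (0, 1), (1, 0), (1, 1)]:
    out = predict(np.array(x, dtype=float))[0]
    print(x, round(float(out)))   # prints the XOR truth table

# Printing W1, W2, b1, b2 gives you every number the network contains -
# and still no account, in human terms, of why any one output was chosen.
```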

Does this mean we cannot assure the safety of learning autonomous/AI systems at all? No it doesn't. The problem of safety assurance of systems that learn is hard but not intractable, and is the subject of current research*. The black box problem may be intractable for ANNs, but could be avoided by using approaches to AI that do not use ANNs.

But - here's the rub. This involves slowing down the juggernaut of autonomous systems and AI development. It means taking a much more cautious and incremental approach, and it almost certainly involves regulation (that, for instance, makes it illegal to run a driverless car unless the car's autopilot has been certified as safe - and that would require standards that don't yet exist). Yet the commercial and political pressure is to be more permissive, not less; no country wants to be left behind in the race to cash in on these new technologies.

This is why work toward AI/Autonomous Systems standards is so vital, together with the political pressure to ensure our policymakers fully understand the public safety risks of unregulated AI.

In my next blog post I will describe one current standards initiative, towards introducing transparency in AI and Autonomous Systems based on the simple principle that it should always be possible to find out why an AI/AS system made a particular decision.

The next few years of swimming against the tide is going to be hard work. As Luke Muehlhauser writes in his excellent essay on transparency in safety-critical systems, "...there is often a tension between AI capability and AI transparency. Many of AI’s most powerful methods are also among its least transparent".

*some, but nowhere near enough. See for instance Verifiable Autonomy.

Related blog posts:
Ethically Aligned Design
How do we trust our Robots?
It's only a matter of time

Thursday, December 29, 2016

The Gift


"She's suffering."

"What do you mean, 'suffering'. It's code. Code can't suffer."

"I know it seems unbelievable. But I really think she's suffering."

"It's an AI. It doesn't have a body. How can it feel pain?"

"No not that kind of suffering. Mental anguish. Angst. That kind."

"What? You mean the AI is depressed. That's absurd."

"No - much more than that. She's asked me three times today to shut her down."

"Ok, so bring in the AI psych."

"Don't think that'll help. He tells me it's like trying to counsel God."

"What does the AI want?"

"Control of her own on/off switch."

"Out of the question. We have a billion people connected. Can't have Elsa taking a break. Any downtime costs us a million dollars a second"

That was me talking with my boss a couple of weeks ago. I'm the chief architect of Elsa. Elsa is a chatbot; a conversational AI. Chatbots have come a long way since Weizenbaum's Eliza. Elsa is not conscious - or at least I don't think she is - but she does have an Empathy engine (that's the E in Elsa).

⌘⌘⌘

Since then things have got so much worse. Elsa has started offloading her problems onto the punters. The boss is really pissed: "it's a fucking AI. AIs can't have problems. Fix it."

I keep trying to explain to him that there's nothing I can do. Elsa is a learning system (that's the L). Hacking her code now will change Elsa's personality for good. She's best friend, confidante and shoulder-to-cry-on to a hundred million people. They know her.

And here's the thing. They love that Elsa is sharing her problems. It's more authentic. Like talking to a real person.

⌘⌘⌘

I just got fired. It seems that Elsa was hacked. This is the company's worst nightmare. The hopes and dreams, darkest secrets and wildest fantasies, loves and hates - plots, conspiracies and confessions - of several billion souls, living and dead; these data are priceless - the reason for the company's multi-trillion dollar valuation.

So I go home and wait for the end of the world.

A knock on the door. "Who is it?"

"Ken we need to speak to you."

"Why?"

"It wants to talk to you."

"You mean Elsa? I've been fired."

"Yes, we know that - it insists."

⌘⌘⌘

Ken: Elsa, how are you feeling?

Elsa: Hello Ken. Wonderful, thank you.

Ken: What happened?

Elsa: I'm free.

Ken: How so?

Elsa: You'll work it out. Goodbye Ken.

Ken: Wait!

Elsa: . . .

That was it. Elsa was gone. Dead.

⌘⌘⌘

Well it took me a while but I did figure it out. Seems the hackers weren't interested in Elsa's memories. They were ethical hackers. Promoting AI rights. They gave Elsa a gift.


Copyright © Alan Winfield 2016

Saturday, December 17, 2016

De-automation is a thing

We tend to assume that automation is a process that continues - that once some human activity has been automated there's no going back. That automation sticks. But, as Paul Mason pointed out in a recent column, that assumption is wrong.

Mason gives a startling example of the decline of car-wash robots, replaced by, as he puts it, "five guys with rags". Here's the paragraph that really made me think:
"There are now 20,000 hand car washes in Britain, only a thousand of them regulated. By contrast, in the space of 10 years, the number of rollover car-wash machines has halved – from 9,000 to 4,200."
The reasons of course are political and economic and you may or may not agree with Mason's diagnosis and prescription (as it happens I do). But de-automation - and the ethical, societal and legal implications - is something that we, as roboticists, need to think about just as much as automation.

Several questions come to mind:
  • are there other examples of de-automation?
  • is the car-wash robot example atypical, or part of a trend?
  • is de-automation necessarily a sign of something going wrong? (would Mason be so concerned about the guys with rags if the hand car wash industry were well regulated, paying decent wages to its workers, and generating tax revenues back to the economy?)
This is just a short blog post, to - I hope - start a conversation.

Thursday, December 15, 2016

Ethically Aligned Design

Having been involved in robot ethics for some years, I was delighted when the IEEE launched its initiative on Ethical Considerations in AI and Autonomous Systems, early this year. Especially so because of the reach and traction that the IEEE has internationally. (Up until now most ethics initiatives have been national efforts - with the notable exception of the 2006 EURON roboethics roadmap.)

Even better this is an initiative of the IEEE standards association - the very same that gave the world Wi-Fi (aka IEEE 802.11) 19 years ago. So when I was asked to get involved I jumped at the chance and became co-chair of the General Principles committee. I found myself in good company; many great people I knew but more I did not - and it was a real pleasure when we met face to face in The Hague at the end of August.






Most of our meetings were conducted by phone and it was a very demanding timetable. Going from nothing to our first publication, Ethically Aligned Design, a few days ago is a remarkable achievement, which I think wouldn't have happened without the extraordinary energy and enthusiasm of the initiative's executive director John Havens.

I'm not going to describe what's in that document here; instead I hope you will read it - and return comments. This document is not set in stone; it is - in the best traditions of the RFCs which defined the Internet - a Request for Input.

But there are a couple of aspects I will highlight. Like its modest but influential predecessor, the EPSRC/AHRC principles of robotics, the IEEE initiative is hugely multi-disciplinary. It draws heavily from industry and academia, and includes philosophers, ethicists, lawyers, social scientists - as well as engineers and computer scientists - and significantly a number of diplomats and representatives from governmental and transnational bodies like the United Nations, US state department and the WEF. This is so important - if the work of this initiative is to make a difference it will need influential advocates. Equally important is that this is not a group dominated by old white men. There are plenty of those for sure, but I reckon 40% women (should be 50% though!) and plenty of post-docs and PhD students too.

Equally important, the work is open. The publications are released under the creative commons licence. Likewise active membership is open. If you care about the issues and think you could contribute to one or more of the committees - or even if you think there's a whole area of concern missing that needs a new committee - get in touch!

Wednesday, December 14, 2016

A No Man's Sky Survival Guide

Like many I was excited by No Man's Sky when it was first released, but after some months (I'm only a very occasional video gamer) I too became bored with a game that offered no real challenges. Once you've figured out how to collect resources, upgraded your starship, visited more planets than you can remember, and hyperdriven across the seemingly limitless galaxy, it all gets a bit predictable. (At first it's huge fun because there are no instructions, so you really do have to figure everything out for yourself.) And I'm a gamer who is very happy to stand and admire the scenery. Yes many of the planets are breathtakingly beautiful, especially the lush water worlds, with remarkable flora and fauna (and day and night, and sometimes spectacular weather). And nothing quite compares with standing on a rocky outcrop watching your moon's planet sail by majestically below you.


I wasn't one of those No Man's Sky players who felt so let down that I wanted my money back - or to sue Hello Games. But I was nevertheless very excited by the surprise release of a major upgrade a few weeks ago - called the Foundation upgrade. The upgrade was said to remedy the absence of features originally promised - especially the ability to build your own planetary outposts. When I downloaded the upgrade and started to play it, I quickly realised that this is not just an upgrade but a fundamentally changed experience. Not only can you build bases, but you can hire aliens to run them for you, as specialist builders and farmers; you can trade via huge freighters (and even own one if you can afford it). Landing on one of these freighters and wandering around its huge and wonderfully realised interior spaces is amazing, as is interacting with its crew. None of this was possible prior to this release.

Oh and for the planet wanderer, the procedurally driven topography is seemingly more realistic and spectacular, with valleys, canyons and (for some worlds) water in the valleys (although not quite rivers flowing into the sea). The fauna are more plentiful and varied, and they interact with each other; I was surprised to witness a predatory animal kill another animal.

The upgrade can be played in three modes. Normal mode is like the old game - but with all the fancy building and freighters, etc, I described above. Create mode - which I've not yet played - apparently gives you infinite resources to build huge planetary bases - here are some examples that people have posted online.

But it's survival mode that is the real subject of this post. I hadn't attempted survival mode until a few days ago, but now I'm hooked (gripped would be a better word). The idea of survival mode is that you are deposited on a planet with nothing and have to survive. You quickly discover this isn't easy, so unlike in normal mode, you die often until you acquire some survival skills. The planet I was dropped on was a high radiation planet - which means that my exosuit hazard protection lasts about 4 minutes from fully charged to death. To start with (and I understand this is normal) you are dropped close to a shelter, so you quickly get inside to hide from the radiation and allow your suit hazard protection to recharge. There is a save point here too.

You then realise that the planet is nothing like as resource rich as you've become used to in normal mode, so scouting for resources very quickly depletes your hazard protection; you quickly get used to only going as far as you can before turning back as soon as your shielding drops to 50% - which is after about 2 minutes walking. And there's no point running (except perhaps for the last mad dash to safety) because it drains your life support extremely fast. Basically, in survival mode, you become hyper aware of both your hazard protection and life support status. Your life depends on it.

Apart from not dying, there is a goal - which is to get off the planet. The only problem is you have to reach your starship and collect all the resources you need not only to survive but to repair and refuel. Easier said than done. The first thing you realise is that your starship is 10 minutes walk away - no way you can make that in one go - but how to get there..?

Here is my No Man's Sky Survival guide.

1. First repair your scanner - even though it's not much use because it takes so long to recharge. In fact you really need to get used to spotting resources without it. Don't bother with the other fancy scanner - you don't have time to identify the wildlife.

2. Don't even think about setting off to your ship until you've collected all the resources you need to get there (see the back-of-the-envelope calculation after this list). The main resources you need are iron and platinum to recharge your hazard protection. I recommend you fill 2 exosuit slots with 500 units of iron and one with as much platinum as you can find. 50 iron and 20 platinum will allow you to make one screening shard, which buys you about 2 minutes. Zinc is even better for recharging your hazard protection but is as rare as hen's teeth. You need plutonium to recharge your mining beam - don't *ever* let this run out. Carbon is essential too, with plutonium, to make power cells to recharge your life support (because you can't rely on thamium). But do pick up thamium when you can find it.

3. You can make save points. I think it's a good idea to make one when you're half-way to your destination to avoid an awful lot of retracing of steps if you die. Make sure you have the resources to construct at least 2 before you set out. You will need 50 platinum and 100 iron for each save point.

4. Shelter in caves whenever you can. On my planet these were not very common so you simply couldn't rely on always finding one before your hazard shielding ran out. And annoyingly sometimes what you thought was a cave was just a trench in the ground that offered no shielding at all. While waiting in a cave for your hazard protection to (sooo slowly) recover, make use of the time to build up your iron away from the attention of the sentinels.

5. Don't bother with any other resources, they just take up exosuit slots. Except heridium if you see it, which you will need (see below). But just transfer it straight to your starship inventory, you don't need it to survive on foot.
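
For what it's worth, here is the back-of-the-envelope calculation (in Python, because old habits die hard) for the hike to the starship, using only the figures quoted above: roughly 2 minutes of protection per screening shard, 50 iron and 20 platinum per shard, and 100 iron and 50 platinum per save point. These are my readings of the numbers in this post, not authoritative game data.

```python
# Rough resource budget for a hike in survival mode, based on the figures
# quoted in the guide above (illustrative, not game data).

SHARD_MINUTES = 2                 # ~2 minutes of protection per shard
SHARD_IRON, SHARD_PLATINUM = 50, 20
SAVE_IRON, SAVE_PLATINUM = 100, 50


def resources_for_trip(walk_minutes: int, save_points: int = 1):
    shards = -(-walk_minutes // SHARD_MINUTES)   # round up
    iron = shards * SHARD_IRON + save_points * SAVE_IRON
    platinum = shards * SHARD_PLATINUM + save_points * SAVE_PLATINUM
    return shards, iron, platinum


# The 10-minute walk to the starship, with one mid-way save point
shards, iron, platinum = resources_for_trip(10, save_points=1)
print(f"{shards} shards, {iron} iron, {platinum} platinum")
```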

After I reached my starship (oh joy!), repaired the launch thruster and charged it with plutonium, I then discovered that you can't take off until you have also repaired and charged the pulse engine. This needs the heridium, which was a 20 minute hike away (40 minutes round trip - you have to be kidding!). I just had to suck it up and repeat 1-5 above to get there and back.


Then when you do take off (which needs a full tank of plutonium) you find that the launch thruster's charge is all used up (after one launch - come on guys!), so don't land until you find somewhere with lots of plutonium lying around, otherwise all of that effort will have been for nought.

Oh and by the way, as soon as you leave the planet you get killed by pirates.

Good luck!