Nothing is true everything is permitted v2 Week 2

Annotated English course syllabus (Q2 - Part 7)
Course: English Level 2 (LANG0087-4)

Academic year: 2020/2021

Grammar (2)

Type 0 Conditionals

The pattern is if... + present... + present.

If the doorbell rings, the dog barks

If you heat iron, it expands

Here the pattern means that one thing always follows automatically from another. We can use when instead of if.

If/when I reverse the car, it makes a funny noise

We can also use Type 0 for the automatic result of a possible future action.

If the team win tomorrow, they get promoted to a higher league

Type 1 Conditionals

The pattern is if... + present... + will.

If it rains, the reception will take place indoors

The if-clause expresses an open condition. It leaves open the question whether it will rain or not.

Here the present simple expresses future time. We do not normally use will in an open condition.

As well as the present simple, we can use the continuous or the perfect.

If we’re having ten people over, we’ll need to pay the police

If I’ve finished my work by ten, I will probably watch a film on Netflix

Type 2 Conditionals

The pattern is if... + past... + would.

If I had lots of money, I would travel round the world

Here the past tense expresses an unreal condition. We do not use would in the if-clause.

We also use the Type 2 pattern for a theoretical possibility in the future.

If you lost the book, you would have to pay for a new one.

Here the past tense expresses an imaginary future action such as losing the book.

Compare Types 1 and 2 for possible future actions:

Type 1 If we stay in a hotel, it will be expensive

Type 2 If we stayed in a hotel, it would be expensive

Type 1 expresses the action as an open possibility. (We may or may not stay in a hotel.) Type 2 expresses the action as a theoretical possibility, something more distant from reality.

As well as the past simple, we can use the continuous or could.

If the sun was shining, everything would be perfect

If I could help, I would, but I’m afraid I can’t

Type 3 Conditionals

The pattern is if... + past perfect... + would + perfect infinitive.

If you had taken the taxi, you would have arrived in time

Here the past perfect refers to something unreal, an imaginary past action. In this example, the if-clause means that you didn’t in fact take the taxi.

We cannot use the past simple or perfect in the main clause.

Should, were, had, and inversion

The following types of clause are rather formal.

We can use should in an if-clause to talk about something which is possible but not very likely.

I’m not expecting any calls, but if anyone should ring, could you please take a message?

We can also use happen to in this case:

If anyone happens to ring/should happen to ring, could you please take a message?

Sometimes we use the subjunctive were instead of was.

If the picture were/was genuine, it would be worth thousands of pounds

We can also use were for a theoretical possibility:

If the decision were to go against us, we would appeal

We can express a condition with should or the subjunctive were by inverting the subject and verb.

Should anyone ring, could you please take a message?

Were the picture genuine, it would be worth thousands of pounds

We can do the same with the past perfect:

Had you taken a taxi, you would have arrived in time

But an if-clause is more common, especially in informal English.

Practice (2)

the rest of eternity screaming in its torture chamber. It’s like the videotape in The Ring. Even death is no escape, for if you die, Roko’s Basilisk will resurrect you and begin the torture again.

Are you sure you want to keep reading? Because the worst part is that Roko’s Basilisk already exists. Or at least, it already will have existed —which is just as bad.

Roko’s Basilisk exists at the horizon where philosophical thought experiment blurs into urban legend. The Basilisk made its first appearance on the discussion board LessWrong, a gathering point for highly analytical sorts interested in optimizing their thinking, their lives, and the world through mathematics and rationality. LessWrong’s founder, Eliezer Yudkowsky, is a significant figure in techno-futurism; his research institute, the Machine Intelligence Research Institute, which funds and promotes research around the advancement of artificial intelligence, has been boosted and funded by high-profile techies like Peter Thiel and Ray Kurzweil, and Yudkowsky is a prominent contributor to academic discussions of technological ethics and decision theory. What you are about to read may sound strange and even crazy, but some very influential and wealthy scientists and techies believe it.

One day, LessWrong user Roko postulated a thought experiment: What if, in the future, a somewhat malevolent AI were to come about and punish those who did not do its bidding? What if there were a way (and I will explain how) for this AI to punish people today who are not helping it come into existence later? In that case, weren’t the readers of LessWrong right then being given the choice of either helping that evil AI come into existence or being condemned to suffer?

You may be a bit confused, but the founder of LessWrong, Eliezer Yudkowsky, was not. He reacted with horror:

Listen to me very closely, you idiot. YOU DO NOT THINK IN SUFFICIENT DETAIL ABOUT SUPERINTELLIGENCES CONSIDERING WHETHER OR NOT TO BLACKMAIL YOU. THAT IS THE ONLY POSSIBLE THING WHICH GIVES THEM A MOTIVE TO FOLLOW THROUGH ON THE BLACKMAIL. You have to be really clever to come up with a genuinely dangerous thought. I am disheartened that people can be clever enough to do that and not clever enough to do the obvious thing and KEEP THEIR IDIOT MOUTHS SHUT about it, because it is much more important to sound intelligent when talking to your friends. This post was STUPID.

Yudkowsky said that Roko had already given nightmares to several LessWrong users and had brought them to the point of breakdown. Yudkowsky ended up deleting the thread completely, thus assuring that Roko’s Basilisk would become the stuff of legend. It was a thought experiment so dangerous that merely thinking about it was hazardous not only to your mental health, but to your very fate.

Some background is in order. The LessWrong community is concerned with the future of humanity, and in particular with the singularity—the hypothesized future point at which computing power becomes so great that superhuman artificial intelligence becomes possible, as does the capability to simulate human minds, upload minds to computers, and more or less allow a computer to simulate life itself. The term was coined in 1958 in a conversation between mathematical geniuses Stanislaw Ulam and John von Neumann, where von Neumann said, “The ever accelerating progress of technology ... gives the appearance of approaching some essential singularity in the history of the race beyond which human affairs, as we know them, could not continue.” Futurists like science-fiction writer Vernor Vinge and engineer/author Kurzweil popularized the term, and as with many interested in the singularity, they believe that exponential increases in computing power will cause the singularity to happen very soon—within the next 50 years or so. Kurzweil is chugging 150 vitamins a day to stay alive until the singularity, while Yudkowsky and Peter Thiel have enthused about cryonics, the perennial favorite of rich dudes who want to live forever. “If you don’t sign up your kids for cryonics then you are a lousy parent,” Yudkowsky writes.

If you believe the singularity is coming and that very powerful AIs are in our future, one obvious question is whether those AIs will be benevolent or malicious. Yudkowsky’s foundation, the Machine Intelligence Research Institute, has the explicit goal of steering the future toward “friendly AI.” For him, and for many LessWrong posters, this issue is of paramount importance, easily trumping the environment and politics. To them, the singularity brings about the machine equivalent of God itself.

Yet this doesn’t explain why Roko’s Basilisk is so horrifying. That requires looking at a critical article of faith in the LessWrong ethos: timeless decision theory. TDT is a guideline for rational action based on game theory, Bayesian probability, and decision theory, with a smattering of parallel universes and quantum mechanics on the side. TDT has its roots in the classic thought experiment of decision theory called Newcomb’s paradox, in which a superintelligent alien presents two boxes to you:

The alien gives you the choice of either taking both boxes, or only taking Box B. If you take both boxes, you’re guaranteed at least $1,000. If you just take Box B, you aren’t guaranteed anything. But the alien has another twist: Its supercomputer, which knows just about everything, made a prediction a week ago as to whether you would take both boxes or just Box B. If the supercomputer predicted you’d take both boxes, then the alien left the second box empty. If the supercomputer predicted you’d just take Box B, then the alien put the $1 million in Box B.

So, what are you going to do? Remember, the supercomputer has always been right in the past.

This problem has baffled no end of decision theorists. The alien can’t change what’s already in the boxes, so whatever you do, you’re guaranteed to end up with more money by taking both boxes than by taking just Box B, regardless of the prediction. Of course, if you think that way and the computer predicted you’d think that way, then Box B will be empty and you’ll only get $1,000. If the computer is so awesome at its predictions, you ought to take Box B only and get the cool million, right? But what if the computer was wrong this time? And regardless, whatever the computer said then can’t possibly change what’s happening now, right? So prediction be damned, take both boxes! But then ...

The maddening conflict between free will and godlike prediction has not led to any resolution of Newcomb’s paradox, and people will call themselves “one-boxers” or “two-boxers” depending on where they side. (My wife once declared herself a one-boxer, saying, “I trust the computer.”)

TDT has some very definite advice on Newcomb’s paradox: Take Box B. But TDT goes a bit further. Even if the alien jeers at you, saying, “The computer said you’d take both boxes, so I left Box B empty! Nyah nyah!” and then opens Box B and shows you that it’s empty, you should still only take Box B and get bupkis.

If you do not subscribe to the theories that underlie Roko’s Basilisk and thus feel no temptation to bow down to your once and future evil machine overlord, then Roko’s Basilisk poses you no threat. (It is ironic that it’s only a mental health risk to those who have already bought into Yudkowsky’s thinking.) Believing in Roko’s Basilisk may simply be a “referendum on autism,” as a friend put it. But I do believe there’s a more serious issue at work here because Yudkowsky and other so-called transhumanists are attracting so much prestige and money for their projects, primarily from rich techies. I don’t think their projects (which only seem to involve publishing papers and hosting conferences) have much chance of creating either Roko’s Basilisk or Eliezer’s Big Friendly God. But the combination of messianic ambitions, being convinced of your own infallibility, and a lot of cash never works out well, regardless of ideology, and I don’t expect Yudkowsky and his cohorts to be an exception.

Define the following words/concepts from the text using your own words:

  1. Urban legend
  2. Prominent
  3. Legend
  4. Is chugging
  5. Paramount
  6. Maddening
  7. Get bupkis
  8. Summary
  9. Far-fetched
  10. Overlord

Which words/concepts from the text have the following definitions?

  1. Magnetic tape for recording and reproducing visual images and sound
  2. Makes or becomes unclear or less distinct
  3. Rearranging or rewriting (data, software, etc.) to improve efficiency of retrieval or processing
  4. The action, treated as a criminal offence, of demanding money from someone in return for not revealing compromising information which one has about them
  5. Risky; dangerous
  6. The characteristic spirit of a culture, era, or community as manifested in its attitudes and aspirations
  7. A statement or proposition which, despite sound (or apparently sound) reasoning from acceptable premises, leads to a conclusion that seems logically unacceptable or self-contradictory
  8. Moral principles that govern a person's behaviour or the conducting of an activity
  9. A set of reasons or a logical basis for a course of action or belief
  10. A developmental disorder of variable severity that is characterized by difficulty in social interaction and communication and by restricted or repetitive patterns of thought and behaviour