Deepfakes use AI technology in a number of fairly different ways, and those differences affect how we think about their impact on a person’s privacy. We’ll consider two significantly different uses of AI in this exercise.
The first approach uses AI technology to make small changes to a video captured of a person in a particular situation. The key point here is that the person in the resulting AI-doctored video actually was in that particular situation, doing many, but not all, of the things shown.
A good example of this approach is the use of AI to create better dubbing of a film into a foreign language (i.e., a language other than the one spoken in the original shoot). The approach works as follows: Given the original film with an actor saying her lines in one language and a video of another actor performing the script in a second language, the AI system produces a film in which the original actor’s lips are seamlessly replaced with those of the other actor. The system also replaces the original actor’s soundtrack with the other actor’s recording.
You might use this same technology to create a funny video of your family dog saying something, in the way that today’s memes caption a picture or short video of a dog with a humorous saying. You’d need only the video of your dog, a video of you saying the humorous line, and the dubbing software described earlier.
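To make the inputs and outputs of this dubbing pipeline concrete, here is a minimal, deliberately crude sketch in Python. It is not how a real deepfake system works: instead of a learned model that finds and seamlessly blends the mouth region in every frame, it simply copies a fixed rectangular region from the dubbing actor’s frames onto the original frames, and all of the frame data is synthetic. The function name, the hand-specified mouth box, and the frame sizes are illustrative assumptions, not part of any real tool, and the audio swap mentioned above is omitted entirely.

```python
# A deliberately crude, non-AI stand-in for the dubbing pipeline described above.
# A real system uses a learned model to localize the mouth in every frame and to
# blend the replacement seamlessly; here we just copy a fixed rectangular region
# frame by frame to show what goes in and what comes out. All data is synthetic,
# and the audio-track swap is not shown.

import numpy as np

def dub_frames(original_frames, dubbing_frames, mouth_box):
    """Paste the dubbing actor's mouth region onto each original frame.

    mouth_box is (top, bottom, left, right) in pixels -- a hand-specified
    stand-in for the per-frame mouth localization a real system would learn.
    """
    top, bottom, left, right = mouth_box
    dubbed = []
    for orig, dub in zip(original_frames, dubbing_frames):
        frame = orig.copy()
        frame[top:bottom, left:right] = dub[top:bottom, left:right]
        dubbed.append(frame)
    return dubbed

# Synthetic stand-ins for the two input videos: 10 frames of 64x64 RGB each.
rng = np.random.default_rng(0)
original = [rng.integers(0, 256, (64, 64, 3), dtype=np.uint8) for _ in range(10)]
dubbing = [rng.integers(0, 256, (64, 64, 3), dtype=np.uint8) for _ in range(10)]

dubbed = dub_frames(original, dubbing, mouth_box=(40, 56, 20, 44))
print(len(dubbed), dubbed[0].shape)  # 10 (64, 64, 3)
```

The point of the sketch is only the shape of the data flow: two videos go in, one lightly doctored video comes out, and everything shown other than the mouth region is exactly what was originally filmed.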
Now let’s consider a different approach. This time, instead of using AI technology to make small changes to a video of a person in a particular situation, we use the technology to make it look like the person was in a completely different situation. The key point here is that the person in the resulting AI-doctored video was never in that situation doing any of the things shown.
An example of this alternative approach is the creation of revenge-porn videos. The approach works as follows: Given a video of some person in a compromising situation and a collection of videos or digital images of the target person, the AI system produces a video in which the target person’s face is seamlessly placed on the head of the actual person in the compromising video. The system may also alter the video’s soundtrack to make the result even more believable.
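For contrast, here is an equally crude sketch of the face-swap data flow just described, again with no AI involved: a single still image standing in for the target person’s face is resized and pasted over a hand-specified face box in every frame of an unrelated source video. A real system would use a learned model to match pose, lighting, and expression across frames; everything here, including the helper names, the face box, and the synthetic data, is purely illustrative.

```python
# A crude, non-AI stand-in for the face-swap approach described above: one still
# image standing in for the target person's face is resized and pasted over the
# face region in every frame of an unrelated source video. A real system would
# use a learned model to match pose, lighting, and expression; all data here is
# synthetic and the face box is given by hand rather than detected.

import numpy as np

def nearest_resize(image, height, width):
    """Nearest-neighbor resize using plain numpy indexing (no image library)."""
    rows = np.arange(height) * image.shape[0] // height
    cols = np.arange(width) * image.shape[1] // width
    return image[rows][:, cols]

def swap_face(source_frames, target_face, face_box):
    """Paste target_face over the (fixed) face box in every source frame."""
    top, bottom, left, right = face_box
    patch = nearest_resize(target_face, bottom - top, right - left)
    swapped = []
    for frame in source_frames:
        out = frame.copy()
        out[top:bottom, left:right] = patch
        swapped.append(out)
    return swapped

# Synthetic stand-ins: 10 source frames of 64x64 RGB and one 32x32 target face.
rng = np.random.default_rng(1)
source = [rng.integers(0, 256, (64, 64, 3), dtype=np.uint8) for _ in range(10)]
target_face = rng.integers(0, 256, (32, 32, 3), dtype=np.uint8)

swapped = swap_face(source, target_face, face_box=(8, 40, 16, 48))
print(len(swapped), swapped[0].shape)  # 10 (64, 64, 3)
```

Here the key difference from the first sketch is that nothing shown in the output video ever happened to the target person: the source footage is of someone else entirely, and only the pasted face belongs to the target.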
Now let’s flip things. Spend a few minutes thinking about harmful uses of the first approach and harmless uses of the second.
Reflect on the ease or difficulty of what we just asked you to do. Categorize the benefits and harms as accruing to an individual or to society. Record your thoughts in the text box below. What you write will NOT be shared with anyone else.
