Deleted member 795
OK...make a xerox of something. Then xerox that. Then that next one. And so on.
You'll notice that the image gradually loses clarity, and other odd patterns imposed by the copier start to affect the subsequent copies. This is, of course, "generation loss"; you can hear the same thing when you copy a tape to another tape, then copy that to a third, and so on. Probably the best example of a "generation loss" system at work is Alvin Lucier's "I Am Sitting in a Room", where the tape machine, room acoustics, playback devices, etc. gradually impose themselves on an original recording of Lucier explaining the motive of the work.
So...if systems can impose generational "ghosts" onto material via repeated recopying, what if you could do the same thing to Jukebox AI? This would be a bit more complicated, to be sure, given how the AI program works. But I think it might be worthwhile to explore. Not only would it sidestep the "fair use" quandary, it would also introduce an intricate element of unpredictability. For example, a 10th-generation AI-generated clip, fed back in to generate the next "result", would be more likely to exhibit the AI's inherent weightings and instructions in its output than an original reworking by the AI would. It's also pretty difficult (if not impossible) to predict what Jukebox AI will do in a feedback loop like this, which makes the potential output quite desirable for the extreme "uncanny valley" effects that will likely occur.
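The loop itself is easy to sketch, even before touching the real model. Here's a toy Python version: the `regenerate` function is a hypothetical stand-in for one Jukebox pass (the real thing would resynthesize the audio from its own learned priors); here it just adds a little noise and re-quantizes, the way a copier or tape deck imposes its own signature on each copy. The point is the shape of the loop, not the stand-in.

```python
import math
import random

def regenerate(clip, rng):
    # Hypothetical stand-in for one AI pass: the real model would
    # resynthesize the audio; here we add a bit of noise ("ghosts")
    # and re-quantize to 8 bits to mimic one lossy copy generation.
    out = []
    for s in clip:
        s = s + rng.gauss(0.0, 0.01)       # artifacts imposed per pass
        s = max(-1.0, min(1.0, s))         # keep samples in valid range
        out.append(round(s * 127) / 127)   # 8-bit re-quantization
    return out

rng = random.Random(0)
# 0.1 s of a 440 Hz sine at 44.1 kHz stands in for the source recording
original = [math.sin(2 * math.pi * 440 * n / 44100) for n in range(4410)]

clip = original
drift = []
for gen in range(10):
    clip = regenerate(clip, rng)  # feed the output straight back in
    rms = math.sqrt(sum((a - b) ** 2 for a, b in zip(clip, original))
                    / len(original))
    drift.append(rms)  # RMS distance from the original per generation

print(f"gen 1 drift: {drift[0]:.4f}  gen 10 drift: {drift[-1]:.4f}")
```

With a real generative model in place of the stub, each pass's "error" isn't random noise but the model's own biases, which is exactly why the drift should get stranger rather than just fuzzier.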
Give it a shot...whatcha got to lose?