cody1 · observer · Joined: Sep 25, 2021 · Messages: 27 · Reaction score: 105 · Awards: 22
This thread has been viewed 3070 times.
Back in the day we used to fill in the two-word captchas with the n-word as one of the two words. I don't think it did much, but it was always funny.
As for people being concerned that you're performing text recognition to train an AI: OCR software is nothing new (I've worked with it and still do, currently for a service that scans physical invoices into digital form). It's likely your answers were used that way in the past, but at this point OCR software is so good that the only human work left in an OCR system is context training, i.e., teaching it that some piece of text relates to a specific thing, rather than helping the computer recognize the letters themselves.
I really hate the new 4chan captcha too, makes me post there less and less.
> I'm probably the only person on earth who actually likes the new 4chan captcha. Matching up the distorted letters is less annoying and takes less time than reCAPTCHA's stupid picture recognition thing. Plus, it's (probably) not helping Google make more money.

Is that how that works? I couldn't understand it at all
> Is that how that works? I couldn't understand it at all

It's exactly like the old-school captchas with distorted text; you just have to move the scroll bar around until you get something legible. It's fairly easy once you get the hang of it.
> As to social media, there have already been astroturfing campaigns (fake grass-roots movements) run by political interests on important social issues. Just look up "political astroturfing" - I don't want to give examples for fear of appearing biased.

I recommend reading a book called Toxic Sludge Is Good For You. It investigates cases of American PR companies manipulating the public and covers a lot about astroturfing. It's shocking to see how old public manipulation is; at least 50 years ago they already had very good techniques. My favourite chapter was about the tobacco industry and how it hired PR companies to astroturf pro-tobacco movements under the guise of American freedoms.
> You'll be interested to note that you do not have to be correct on 100% of the images to pass the verification, so I aim for 50% to blur the resulting data as best I can.

I just did this on an hCaptcha for the first time and holy shit, you're right. I didn't even pick any of the right things and it still put me through. I'm gonna do the 50% thing every time from now on.
> I've been getting some weird captchas recently, as well. Anyone know how I'm supposed to solve this one?
> View attachment 36934

This would be so cool. I've never played Touhou, but imagine if the bar for entering a website was beating a super difficult video game level or challenge. I think logging in would feel a lot more rewarding, considering it would take a certain amount of learned skill to enter every time. Or maybe I would just get annoyed and stop using it.
> Tbh in the coming years I see widespread use of malicious AI beginning a new era for the internet. One where truth is nearly impossible to verify, where you can no longer be reasonably confident that a person on a forum, in voice chat, or in a multiplayer game is real.

I already live like this. Also, I thought I was the only person who couldn't solve 4chan's pain-in-the-ass captchas. But I might just be retarded.
> Pardon me if someone already said something along these lines (I just skimmed this thread!).
> There's a very clear purpose behind all this AI training—at least, in my eyes. The aim is to fully automate surveillance, and especially the process of tracking someone through a breadcrumb trail of photos.
> Consider the Captchas that test you on pictures of cars, planes, and bikes. Now, consider you're a detective trying to catch a criminal, and you have 6000 photos available from the suspect's iCloud account (as well as the accounts of his friends & family, which they handed over voluntarily). You have several witness reports that, at the scene of the crime, the criminal was on a bike. Rather than going through each picture individually, you have an AI check for bikes and provide you all the photos. Apple already has this kind of tech built into iOS.
> Now, imagine our hypothetical criminal is guilty of hate speech.
> Is it fun yet?

looool. dont know what you are talking about schizo, check out this cool thing i found on tiktok
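The photo-triage workflow described in that post (run an object detector over a photo collection, keep only the images containing a target object) can be sketched roughly like this. This is a minimal illustration, not any real forensic tool: `detect_objects` is a hypothetical stand-in for an actual vision model, and here it just reads pre-computed tags attached to each photo.

```python
def detect_objects(photo):
    # Hypothetical detector: a real system would run an ML model over
    # the image bytes; this stand-in returns the photo's example tags.
    return photo["tags"]

def find_photos_with(photos, target_label):
    """Return only the photos whose detected objects include target_label."""
    return [p for p in photos if target_label in detect_objects(p)]

# Tiny stand-in for a large photo collection.
library = [
    {"name": "IMG_0001.jpg", "tags": {"car", "street"}},
    {"name": "IMG_0002.jpg", "tags": {"bike", "park"}},
    {"name": "IMG_0003.jpg", "tags": {"bike", "street"}},
]

hits = find_photos_with(library, "bike")
print([p["name"] for p in hits])  # the photos containing a bike
```

The point of the scenario above is that the filtering step is trivial once a detector exists; all the hard work (and all the captcha-labeled training data) goes into `detect_objects`.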
> Pardon me if someone already said something along these lines (I just skimmed this thread!). [...] Is it fun yet?

This is technology that has already been rolled out in China and Russia. I'm sure it's already being used in America and Europe too, but it's behind the scenes, and they cover it by pretending to get that information through other avenues.
> heard recently that google was selling captcha data to the pentagon to train autonomous military drones. sounds pretty batshit now that i type it out but has anyone else heard about this/have any info on it?

How's that batshit? Honestly, it seems like the logical next step after ditching their whole "don't be evil" mantra.
View attachment 38763
this bullshit is so obviously generated by an AI. look at the bottom right, middle left, the turtle, middle right, top left, top middle, FUCKING EVERY PICTURE
> New Vice article on the issue: https://www.vice.com/en/article/xgwy5n/captcha-is-asking-users-to-identify-objects-that-dont-exist

That article seems to be discussing two different things. One is the captcha companies making money from their users by selling user data - in this case, image identification - to organizations doing machine learning for whatever purpose. The other is the bit they mention at the end about GANs (generative adversarial networks), which I fear is not directly related to those weird images.