FTC: Vast Surveillance of Users by Social Media and Video Streaming Companies
(ftc.gov)
522 points by nabla9 10 months ago
I agree. Let me tell you about what just happened to me. After a very public burnout and spiral, a friend rescued me and I took a part-time gig helping a credit card processing company. About 2 months ago, the owner needed something done while I was out, and got their Uber driver to send an email. They emailed the entire customer database, including bank accounts, socials, names, addresses, and finance data, to a single customer. When I found out (it was kept hidden from me for 11 days), I said "This is a big deal; here are all the remediations, and besides PCI, we have 45 days by law to notify affected customers." The owner said "we aren't going to do that", and thus I had to turn in my resignation and am now unemployed again.
So for trying to do the right thing, I am now scrambling for work, while the offender pretends nothing happened after potentially exposing the entire customer base, and will likely suffer no penalty unless I report it to PCI, which I would get no reward for.
Why is it everywhere I go management is always doing shady stuff. I just want to do linuxy/datacentery things for someone who's honest... /cry
My mega side project isn't close enough to do a premature launch yet. Despite my entire plan being to forgo VC/investors, I'm now considering compromising.
>Why is it everywhere I go management is always doing shady stuff.
Well here's a cynical take on this - management is playing the business game at a higher level than you. "Shady stuff" is the natural outcome of profit motivation. Our society is fundamentally corrupt. It is designed to use the power of coercive force to protect the rights and possessions of the rich against the threat of violence by the poor. The only way to engage with it AND keep your hands clean is to be in a position that lets you blind yourself to the problem. At the end of the day, we are all still complicit in enabling slave labor and are beneficiaries of policies that harm the poor and our environment in order to enrich our lives.
>unless I report it to PCI, which I would get no reward for.
You may be looking at that backwards. Unless you report it to PCI, you are still complicit in the mishandling of the breach, even though you resigned. You might have been better off reporting it over the owner's objections, then claiming whistleblower protections if they tried to terminate you.
This is not legal advice, I am not a lawyer, I am not your lawyer, etc.
I did verify with an attorney that, since I wasn't involved and made sure the owner knew what was what, I had no legal obligation to disclose.
The problem isn't society or profit motivation. It's people. Humanity itself is corrupt. There aren't "good people" and "bad people". There's only "bad people." We're all bad people, just some of us are more comfortable with our corruption being visible to others to a higher degree.
The DOJ has just launched a corporate whistleblower program, you should look into it maybe it covers your case:
https://www.justice.gov/criminal/criminal-division-corporate...
>As described in more detail in the program guidance, the information must relate to one of the following areas: (1) certain crimes involving financial institutions, from traditional banks to cryptocurrency businesses; (2) foreign corruption involving misconduct by companies; (3) domestic corruption involving misconduct by companies; or (4) health care fraud schemes involving private insurance plans.
>If the information a whistleblower submits results in a successful prosecution that includes criminal or civil forfeiture, the whistleblower may be eligible to receive an award of a percentage of the forfeited assets, depending on considerations set out in the program guidance. If you have information to report, please fill out the intake form below and submit your information via CorporateWhistleblower@usdoj.gov. Submissions are confidential to the fullest extent of the law.
Why would you resign? You could have reported it yourself and then you would have whistleblower protections - if the company retaliated against you (e.g. fired you), you then would have had a strong lawsuit.
Because I don't want to be associated with companies that break the law and violate regulations knowingly. I've long had a reputation of integrity, and it's one of the few things I have left having almost nothing else.
Yes. The owner is old, and going blind, but refuses to sell or hand over day to day ops to someone else, and thus must ask for help on almost everything. I even pulled on my network to find a big processor with a good reputation to buy the company, but after constant delays and excuses for not engaging with them, I realized to the owner the business is both their "baby" and their social life, neither of which they want to lose.
YMMV, but it took me 15 minutes start to finish to freeze my credit with the 3 bureaus using the following instructions.
https://www.nerdwallet.com/article/finance/how-to-freeze-cre...
YMMV indeed.
Since moving overseas 15 years ago, I tried numerous times and it simply is not possible. All the forms require a U.S. mailing address to register. Same for online access to your Social Security account.
There are an estimated 10 million Americans living overseas. Taken together, we are the equivalent of the 11th largest state. All of us completely blind to what is happening with our credit record and Social Security account.
At this point I think the only way this gets fixed is massive fraud/exploitation by organized crime, so these organizations finally address the problem.
> There are an estimated 10 million Americans living overseas
Curious how you found this number, have a source?
This made me pretty curious, but I couldn't find any official numbers. The closest 'official' numbers I could find are from the Federal Voting Assistance Program [0], which lists 4.4 million people, only 2.8 million of whom are adults.
[0] https://www.fvap.gov/info/interactive-data-center/overseas
Strange that someone down-voted you, as this is a fair question.
> Curious how you found this number, have a source?
I don't have the source handy but have seen the estimated 10 million figure cited repeatedly. But maybe it is about a million too high, as the US Department of State estimates nine million in this 2020 publication: https://travel.state.gov/content/dam/travel/CA-By-the-Number...
This Wikipedia page has a lot more info for those interested: https://en.wikipedia.org/wiki/Emigration_from_the_United_Sta...
Using FVAP stats to me seems problematic, because just like the general population, many US citizens do not bother registering to vote (though they do acknowledge this on the page you linked to and try to control for it).
State likely has a more accurate estimate from knowing how many passport renewals originate from overseas addresses. I am sure some Americans renew or replace their passports while merely travelling overseas, but I cannot imagine this is a routine practice.
Unfortunately, that isn’t enough to mitigate identity theft. Someone leveraging the recent National Public Data breach opened a checking and savings account using my identity (no credit checks are performed in doing so), then committed wire fraud using those accounts.
Banks use various other services such as Early Warning. Still, it's absurd the lengths we need to go to for any level of assurance against fraud.
> To me, what's missing from that set of recommendations is some method to increase the liability of companies who mishandle user data.
As nice as this is on paper, it will never happen; lobbyists exist. Not to put on a tinfoil hat, but why would any lawmaker slap the hand that feeds them?
Until there is an independent governing body that is permitted to regulate the tech industry as a whole, it won't happen. Consider the FDA: they decide which drugs and ingredients are allowed, and that's all fine. There could be a regulating body that determines, for example, the risk to people's mental health from the 'features' of tech companies. But getting that body created will require a tragedy, like the one that led to the FDA being created in the first place. [1]
That's just my 2 cents.
1 : https://www.fda.gov/about-fda/fda-history/milestones-us-food....
>There could be a regulating body which could determine the risk to people's mental health for example from 'features' of tech companies etc.
I think ideas like this are why it's not going to happen.
Our understanding of mental health is garbage. Psychiatry used to be full of quackery and very well still might be. Treatment for something like depression boils down to "let's try drugs in a random order until one works". It's a field where a coin flip rivals the accuracy of studies. Therefore any regulating body on that will just be political. It will be all about the regulators "doing something" because somebody wrote enough articles (propaganda).
Problems like this are why people aren't interested in supporting such endeavors.
That is not the treatment for depression.
this argument reduces mental health to medication, which leaves aside everything from the history of mental health (asylums, witch burnings to today), leaps in medicine (from lobotomies, to SNRIs, bipolar meds and more), to simply better diagnoses.
There are certainly tons of people here who have benefited from mental health professionals - overextending the flaws in psych simply to dismiss the idea of a watchdog is several unsupported arguments too far.
Psychiatry is useful in the way statistics is useful for math models we don't fully understand. Statistics lets us get at answers with enough data even though we don't really understand the underlying model at play.
There are a whole host of 'sciences' that are kind of 2nd tier like this, psychiatry being one of them. Once we understand enough neuroscience, it seems likely to me that psychiatry will get consumed by neuroscience, which will splinter into categories more useful for day-to-day life as it grows (much as psychiatry did).
Super book on the subject and also talks about the rising bar for individual culpability as we understand more about the brain: https://www.amazon.com/Incognito-Secret-Lives-David-Eagleman...
Civil disobedience is the only way stuff like this happens in America. You're right about the incentives of those in power, but how do you think we got emancipation, women's suffrage, organized labor rights, prohibition, and the end of prohibition?
Prohibition isn't over. The war on drugs is still going strong, even with marijuana legalization in many states.
I’m completely sympathetic to making companies more liable for data security. However, until data breaches regularly lead to severe outcomes for subjects whose personal data was leaked, and those outcomes can be causally linked to the breaches in an indisputable manner, it seems unlikely for such legislation to be passed.
I forgot where I saw this, but the US govt recently announced that they see mass PII theft as a legitimate national security issue.
It’s not just that you or I will be inconvenienced with a bit more fraud or email spam, but rather that large nation state adversaries having huge volumes of data on the whole population can be a significant strategic advantage
And so far we typically see email+password+ssn be the worst data leaked; I expect attackers will put in more effort to get better data where possible. Images, messages, gps locations, etc
yes, privacy is not an individual problem; it's a civil defense problem, and not just when your opponent is a nation-state. we already saw this in 02015 during the daesh capture of mosul; here's the entry from my bookmarks file:
https://www.facebook.com/dwight.crow/media_set?set=a.1010475... “#Weaponry and morale determine outcomes. The 2nd largest city of Iraq (Mosul) fell when 1k ISIS fighters attacked “60k” Iraqi army. 40k soldiers were artifacts of embezzlement, and of 20k real only 1.5k fought - these mostly the AK47 armed local police. An AK47 loses to a 12.7mm machine gun and armored suicide vehicle bombs. Finally, the attack was personal - soldiers received calls mid-fight threatening relatives by name and address. One army captain did not leave quickly enough and had two teenage sons executed.” #violence #Iraq #daesh
of course the americans used this kind of personalized approach extensively in afghanistan, and the israelis are using it today in lebanon and gaza, and while it hasn't been as successful as they hoped in gaza, hamas doesn't exactly seem to be winning either. it's an asymmetric weapon which will cripple "developed" countries with their extensive databases of personal information
why would a politician go to war in the first place if the adversary has the photos and imeis of their spouse, siblings, and children, so they have a good chance of knowing where they are at all times, and the politician can't hope to protect them all from targeted assassination?
the policy changes needed to defend against this kind of attack are far too extreme to be politically viable. they need to be effective at preventing the mere existence of databases like facebook's social graph and 'the work number', even in the hands of the government. many more digital pearl harbors like the one we saw this week in lebanon will therefore ensue; countries with facebook, credit bureaus, and national identity cards are inevitably defenseless
imposing liability on companies whose data is stolen is a completely ineffective measure. first, there's no point in punishing people for things they can't prevent; databases are going to get stolen if they're in a computer. second, the damage done even at a personal level can vastly exceed the recoverable assets of the company that accumulated the database. third, if a company's database leaking got your government overthrown by the zetas or daesh, what court are you going to sue the company in? one operated by the new government?
Perhaps you're not aware of https://en.wikipedia.org/wiki/Office_of_Personnel_Management...
The expansion of KYC and the hegemonic dominance of our global financial intelligence network is a recent infringement on our privacy that would not necessarily pass popular muster if it became well-known.
Most of our population is still living in a headspace where transactions are effectively private and untraceable, from the cash era, and has not considered all the ways that the end of this system makes them potential prey.
The fact is that the market is demanding a way to identify you both publicly and privately, and it will use whatever it needs to, including something fragile like a telephone number 2fa where you have no recourse when something goes wrong. It's already got a covert file on you a mile long, far more detailed than anything the intelligence agencies have bothered putting together. The political manifestation of anti-ID libertarians is wildly off base.
The concern about organizations' and the government's felt need to track you is very valid. Why does the government need to make sure your "hand job from a friend" Venmo payment to your friend is "legally legit"? (You can get transactions flagged for this, and the moderator will shame you.)
Are you correct in what's going on? Yes. Are we placed in this with no option to resist? For the most part yes.
Because they're the mark of the beast or a step towards fascism or something.
I don't think it would take much to convert Real IDs into a national ID; they are as close as they can get without "freaking people out".
Emphasizing that the number can be changed would really help there.
People could even generate their own number (private key), which they never gave out, and appeared differently to each account manager verifying it, and still replace them.
When you choose your own number, it's only the Mark of the Beast if you are the Beast! * **
* 666, 13, 69 and 5318008 expressly prohibited.
** Our offices only provide temporary tattoos.
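The self-generated number idea upthread — a private value the user never gives out, which appears differently to each account manager verifying it and can be replaced at will — can be sketched with a keyed hash. This is only an illustration of the concept, not any deployed scheme; the seed, the party names, and the 16-hex-digit truncation are all my own assumptions.

```python
import hashlib
import hmac

def pairwise_id(secret_seed: bytes, relying_party: str) -> str:
    """Derive a stable identifier that differs per relying party.

    The user keeps secret_seed private; each verifier sees a different
    number, so a leak at one company can't be correlated with another.
    Rotating the seed replaces every derived identifier at once.
    """
    digest = hmac.new(secret_seed, relying_party.encode(), hashlib.sha256)
    return digest.hexdigest()[:16]

seed = b"user-private-seed-never-shared"
bank_id = pairwise_id(seed, "examplebank.com")    # hypothetical verifier
store_id = pairwise_id(seed, "examplestore.com")  # hypothetical verifier
assert bank_id != store_id  # each account manager sees a different number
```

Because each relying party only ever sees its own derived value, a breach at one of them reveals nothing reusable or linkable elsewhere, and replacing the seed replaces every identifier at once.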
Shared secrets are criminally negligent security architecture in 2024. We can authenticate identity and authorize payment without giving the relying party a token to leak or abuse. The energy behind this problem is good, but "everyone try harder to protect the shared secrets entrusted to you" would be a tragic waste of it.
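One way to sketch "authenticate without giving the relying party a token to leak or abuse" is a challenge-response in which the verifier stores only a public key. Below is a toy Schnorr-style version; the group parameters are deliberately tiny and insecure, and every name here is made up for illustration — a real system would use a vetted signature scheme, not this.

```python
import hashlib
import secrets

# Toy group: a Mersenne prime modulus and small generator.
# Fine for demonstrating the algebra, useless for real security.
P = 2**127 - 1
G = 3

def keygen():
    """Private key x stays with the user; only y goes to the relying party."""
    x = secrets.randbelow(P - 2) + 1
    y = pow(G, x, P)
    return x, y

def prove(x: int, challenge: bytes):
    """Sign the verifier's fresh challenge without revealing x."""
    k = secrets.randbelow(P - 2) + 1
    r = pow(G, k, P)
    h = hashlib.sha256(challenge + str(r).encode()).digest()
    e = int.from_bytes(h, "big") % (P - 1)
    s = (k + e * x) % (P - 1)
    return r, s

def verify(y: int, challenge: bytes, r: int, s: int) -> bool:
    """Check g^s == r * y^e (mod P); holds only for the holder of x."""
    h = hashlib.sha256(challenge + str(r).encode()).digest()
    e = int.from_bytes(h, "big") % (P - 1)
    return pow(G, s, P) == (r * pow(y, e, P)) % P
```

The point of the design: a breach of the verifier's database leaks only public keys and spent challenges, none of which lets an attacker impersonate anyone — in contrast to a stolen SSN or card number, which is itself the credential.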
> [...] would be a tragic waste of it.
The first time would have been a tragedy, from then on it has been farce after farce.
Imagine a world where companies would have to prove the necessity of storing specific factoids. It would only take 1 security researcher to prove it being unnecessary, invalidating that class of "legitimate interests".
Today this value judgement happens in human brains, like the (correct) judgement in your comment. If we want to scale it objectively we would have to switch to formal verification. A whole industry of compliance checking could come to exist where a company wants to get its operations screened for compliance issues, so as not to suffer criminal negligence penalties.
The problem here is the payments industry (continuing to issue and accept "credit card numbers") and the voters (refusing to authorize a proper national ID). An individual entity that has to conduct business under these circumstances has no real alternative.
You are not being harmed by the storage or leakage of a few bytes, that's ridiculous. You are being harmed by the financial industry and government's insistence that knowledge of these bytes is sufficient to take your property or hold a debt against you.
Regulation is key, but I don’t see it as likely when our society is poisoned by culture war BS. Once we put that behind us (currently unlikely), we can pass sane laws reining in huge corporations.
I get a feeling that liability is the missing piece in a lot of these issues. Section 230? Liability. Protection of personal data? Liability. Minors viewing porn? Liability.
Lack of liability is screwing up the incentive structure.
I think I agree, but people will have very different views on where liability should fall, and whether there is a malicious / negligent / no-fault model?
Section 230? Is it the platform or the originating user that's liable?
Protection of personal data? Is there a standard of care beyond which liability lapses (e.g. a nation state supply chain attack exfiltrates encrypted data and keys are broken due to novel quantum attack)?
Minors viewing porn? Is it the parents, the ISP, the distributor, or the creator that's liable?
I'm not here to argue specific answers, just saying that everyone will agree liability would fix this, and few will agree on who should be liable for what.
It's not a solvable problem. Like most tech problems it's political, not technical. There is no way to balance the competing demands of privacy, security, legality, and corporate overreach.
It might be solvable with some kind of ID escrow, where an independent international agency managed ID as a not-for-profit service. Users would have a unique biometrically-tagged ID, ID confirmation would be handled by the agency, ID and user behaviour tracking would be disallowed by default and only allowed under strictly monitored conditions, and law enforcement requests would go through strict vetting.
It's not hard to see why that will never happen in today's world.
>Protection of personal data? Is there a standard of care beyond which liability lapses (e.g. a nation state supply chain attack exfiltrates encrypted data and keys are broken due to novel quantum attack)?
There absolutely should be, especially for personal data collected and stored without the express written consent of those being surveilled. They should have to get people to sign off on the risks of having their personal data collected and stored, be legally prevented from collecting and storing the personal data of people who haven't consented and/or be liable for any leaking or unlawful sharing/selling of this data.
If you aren’t directly harmed yet, what liability would they have? I imagine if your identity is stolen and it can be tied to a breach, then they would already be liable.
The fact that my data can be stolen in the first place is already outrageous, because I neither consented to allowing these companies to have my data, nor benefit from them having my data.
It's like if you go to an AirBNB and the owner sneaks in at night and takes photos of you sleeping naked and keeps those photos in a folder on his bookshelf. Would you be okay with that? If you're not directly harmed, what liability would they have?
Personal data should be radioactive. Any company retaining it better have a damn good reason, and if not then their company should be burned to the ground and the owners clapped in irons. And before anyone asks, "personalized advertisements" is not a good reason.
> before anyone asks, "personalized advertisements" is not a good reason
The good reason is growth. Our AI sector is based on, in large part, the fruits of these data. Maybe it's all baloney, I don't know. But those are jobs, investment and taxes that e.g. Europe has skipped out on that America and China are capitalising on.
My point, by the way, isn't pro surveillance. I enjoy my privacy. But blanket labelling personal data as radioactive doesn't seem to have any benefit to it outside emotional comfort. Instead, we need to do a better job of specifying which data are harmful to accumulate and why. SSNs are obviously not an issue. Data that can be used to target e.g. election misinformation are.
I mean it's pretty clear that you are directly harmed if someone takes naked photos of you without your knowledge or consent and then keeps them. It's not a good analogy so if we want to convince people like the GP of the points you're making, you need to make a good case because that is not how the law is currently structured. "I don't like ads" is not a good reason, and comments like this that are seething with rage and hyperbole don't convince anyone of anything.
>I neither consented to allowing these companies to have my data, nor benefit from them having my data.
I think both of those are debatable.
I don't think that's a proper parallel.
I think a better example would be: You (the Airbnb host) rent a house to Person, and Person loses the house key. Later on (perhaps many years later), You are robbed. Does Person have liability for the robbery?
Of course it also gets really muddy because you'll have been renting the house out for those years, and during that time many people will have lost keys. So does liability get divided? Is it the most recent lost key?
Personally, I think it should just be some statutory damages of probably a very small amount per piece of data.
This is the traditional way of thinking, and a good question, but it is not the only way.
An able-bodied person can freely make complaints against any business that fails its Americans with Disabilities Act obligations. In fact, these complaints by able-bodied well-doers are the de facto enforcement mechanism, even though these people can never suffer damage from that failure.
The answer is simply to legislate the liability into existence.
That's the whole problem with "liability", isn't it? If the harms you do are diffuse enough then nobody can sue you!
The same way you can get ticketed for speeding in your car despite not actually hitting anyone or anything.
This is exactly why thinking of it in terms of individual cases of actual harm, as Americans have been conditioned to do by default, is precisely the wrong way to think about it. We're all familiar with the phrase "an ounce of prevention is worth a pound of cure", right?
It's better to think of it in terms of prevention. This fits into a category of things where we know they create a disproportionate risk of harm, and we therefore decide that the behavior just shouldn't be allowed in the first place. This is why there are building codes that don't allow certain ways of doing the plumbing that tend to lead to increased risk of raw sewage flowing into living spaces. The point isn't to punish people for getting poop water all over someone's nice clean carpet; the point is to keep the poop water from soaking the carpet in the first place.
Safety rules are written in blood. After a disaster there’s a push to regulate. After enough years we only see the costs of the rules and not the prevented injuries and damage. The safety regulations are then considered annoying and burdensome to businesses. Rules are repealed or left unenforced. There is another disaster…
Tangentially, there was an internet kerfuffle about someone getting in trouble for having flower planters hanging out the window of their Manhattan high rise apartment a while back, and people's responses really struck me.
People from less dense areas generally saw this as draconian nanny state absurdity. People who had spent time living in dense urban areas with high rise residential buildings, on the other hand, were more likely to think, "Yeah, duh, this rule makes perfect sense."
Similarly, I've noticed that my fellow data scientists are MUCH less likely to have social media accounts. I'd like to think it's because we are more likely to understand the kinds of harm that are possible with this kind of data collection, and just how irreparable that harm can be.
Perhaps Americans are less likely to support Europe-style privacy rules than Europeans are because Americans are less likely than Europeans to know people who saw first-hand some of what was happening in Europe in the 20th century.
We're 15 years behind the ball in starting to take this seriously and thinking about pushing back, but better late than never.
Next, please rein in the CRAs.
I think Snowden was bang on when in 2013 he warned us of a last chance to fight for some basic digital privacy rights. I think there was a cultural window there which has now closed.
Snowden pointed and everyone looked at his finger. It was a huge shame, but a cultural sign that the US is descending into a surveillance hell hole and people are ok with that. As someone who was (and still is) vehemently against PRISM and NSLs and all that, it was hard to come to terms with. I'm going to keep building things that circumvent the "empire" and hope people start caring eventually.
> and people are ok with that
I've seen no evidence of this. People mostly either don't understand it or feel powerless against it.
>and people are ok with that.
All the propagandists said he was a Russian asset, as if even if that were true, it somehow negated the fact that we were now living under a surveillance state.
>Snowden pointed and everyone looked at his finger.
This is a great way of putting it.
Over ten years ago I wrote about the root of the problem: https://magarshak.com/blog/?p=169
And here is a libertarian solution: https://qbix.com/blog/2019/03/08/how-qbix-platform-can-chang...
It makes me irrationally angry that I suddenly started getting spam emails from Experian. Like motherfucker I never consented for you to have my data, then you leak it all, now you're sending me unsolicited junk email? It's just such bullshit that I'm literally forced to have a relationship with these companies to freeze my credit or else I'm at the mercy of whoever they decide to release my information to without my authorization.
Yep. It sucks. Zero consequences of any import for those companies as far as I'm aware too. Tiny fines end up being "cost of doing business". Then they get to externalize their failures onto us by using terms like "Identity Theft", which indicates something was stolen from ME and is now MY problem.
In actuality some not-well-maintained systems owned by <corp> were hacked or exposed or someone perpetrated fraud on a financial institution and happened to use information that identifies me. It's really backwards.
PSA: If you haven't already, go freeze your credit at Experian, TransUnion, Equifax and Innovis. It will make the perpetration of this type of fraud much more difficult for adversaries.
PSA pro tip: they will try to steer you toward “locking” your account. Don’t fall for it. Freeze your account.
Innovis? That's new to me. How long till they spin up yet another credit check corporation that I have no choice but to involve in my life?
I was also informed you can freeze opening checking accounts here.
My pet solution has been to make the credit reporters liable for transmitting false information to the CRAs.
Chase tells Experian I opened a new line of credit with them, but it later is demonstrated that it was a scammer with my SSN? Congratulations, $5,000 fine.
Of course this all gets priced in to the cost and availability of consumer credit. Good! Now the lenders have an incentive to drive those costs down (cheaper, better identity verification) to compete.
The solution is much simpler. Put all of the consequences of being defrauded by a borrower onto the lender.
If a lender wants to be repaid, then they need to show the borrower all the evidence they have for proof that the borrower entered into the contract.
If all a lender has is the fact that a 9 digit number, date of birth, name, and address were entered online, then the borrower simply has to say “I did not enter that information”, and the lender can go pound sand.
Guarantee all the lenders will tighten up their operations very quickly, and consequently, so will the loans that appear on one’s credit report.
For a while they were sending emails about my account that I was actually unable to unsubscribe from [1]. I knew it was illegal at the time, and when I finally noticed an unsubscribe button it was because the FTC finally intervened [2].
[1] https://www.reddit.com/r/assholedesign/comments/udy8rz/exper...
[2] https://www.ftc.gov/news-events/news/press-releases/2023/08/...
Ever stop and think it's funny that Meta, Google, etc. are worth billions because they figured out how to legally fill a database with information about you? In any other time in history some might call it spying, but well, they figured out how to do it legally, and it's worth billions. Meanwhile, from a technical standpoint, remotely logging your data is a trivial thing, with consent of course. It's like we made this imaginary wall (law) and spent billions building a road around that wall, and that's equivalent to economic prosperity. Similar idea with streaming services versus file sharing.
Spying is done without consent.
Why do people keep saying social media is just a database?
The consent you give to web services isn't much better than if an electrician said "hey, can you tap this button to give me consent to work on your house?" and then installed undetectable hidden microphones inside every surface in your apartment.
All of the UX of online consent forms exists to misinform, trick, and get users used to agreeing to sell their digital soul.
Facebook will create shadow profiles of you even if you've never signed up, never created a profile. They'll take your number from other people's contacts via WhatsApp. They'll do facial scans of you on photos other people upload.
Even if you've never visited their site.
Where's the consent there?
I wouldn't say that. If it meets our needs/wants and we are willing to pay for it, that represents value, no matter how silly it sounds. People pay money for plenty of nonsense: cosmetics, junk food, DLC. The fact that it's artificially derived (laws begetting paid workarounds) doesn't change the value proposition. For data gluttons the investment in data acquisition pays off. There are plenty of people paying money to work around laws, especially tax laws. Tax decisions have a scale from wisdom (an individual making prudent financial decisions) to deviance (a company playing shell games with businesses and bank accounts) but the line can be blurry.
I do think the situation is dystopian though. Sharing data without explicit case-by-case consent should be disallowed.
I think this effort is positive, but a bit misdirected. Think data breach liability. Facebook and YouTube are willing and capable defenders of sensitive customer data. Watch the AshleyMadison documentary. Arrogant disregard for customer privacy and almost no culpability. These smaller, irresponsible players are where consumers are most vulnerable.
Agreed. Mid-sized and smaller players are the ones with very poor data and security practices, especially when they require PII as part of their operations.
Meta and Google are much better stewards of their users' data. One misconception I see is claiming these companies sell user data. I'd instead say that they sell user attention.
They don’t sell user data for a very simple reason - it’s a crappy business, as you can charge much, much more through recurring sales of heavily obfuscated access to the data than through one-off sales of said data.
When you think about it, incentives are kind of aligned with user privacy (kind of, as there’s much more to the story than this simplistic point of view).
I will be surprised if she's still there six months from now. Trump will remove her if he becomes president; whereas if Harris wins, and the GOP take the Senate--a pretty likely scenario--I fear Harris won't hesitate to use Khan as a bargaining chip to gain confirmation of her appointments.
See also on this topic:
>Two billionaire Harris donors hope she will fire FTC Chair Lina Khan
https://www.reuters.com/world/us/two-billionaire-harris-dono...
>Kamala Harris’ Donors Privately Urge Firing of FTC’s Khan, SEC’s Gensler
https://www.bloomberg.com/news/articles/2024-09-06/kamala-ha...
2016 Schneier on Security "Data is a Toxic Asset": https://www.schneier.com/blog/archives/2016/03/data_is_a_tox...
It would be wonderful if the staff report recommendations were taken seriously by our legislators. I think I'll send a copy of this to my reps and say hi.
The full report[0] is a good read; don't just read the summary.
> But these findings should not be viewed in isolation. They stem from a business model that varies little across these nine firms – harvesting data for targeted advertising, algorithm design, and sales to third parties. With few meaningful guardrails, companies are incentivized to develop ever-more invasive methods of collection.
[0]: https://www.ftc.gov/system/files/ftc_gov/pdf/Social-Media-6b...
Surveillance is cancerous. It keeps on growing, feeding on justification for every data point "just because", and then eventually it kills you.
> The report found that the companies collected and could indefinitely retain troves of data, including information from data brokers, and about both users and non-users of their platforms.
As a non-user of many social media platforms, is there anything I can do to prevent companies from collecting data about me? It feels wrong that companies you do not sign up for are still finding and processing data about you.
This is truly interesting from a dialectical perspective. The current narrative is that data is simultaneously infinitely valuable and presents zero liability. This contradiction can't hold forever (though it can hold longer than any of us are alive, of course)
I suspect it will break in the direction of the narrative that "data wasn't that valuable anyway", regardless of how disingenuous this sentiment is. Nothing else preserves the economic machine while simultaneously dismissing the concerns of consumers. Perhaps we'll get special protection for stuff like SSNs to make it seem like politicians are acting on behalf of their constituents (even though a competent manager of a rational society would simply ban use of the SSN as a form of identification, as it is basically public information).
Far be it from me to try to defend platforms, but I am still wondering (for years now):
How are data deletion requests supposed to be handled in practice, when the only way to be sure is to physically destroy the hardware the data was stored on? (Especially the case for transistor-based storage, and even more so when wear leveling is being used.)
Or is this actually a "pinky promise" by the company not to restore the data (or else they will have to face legal consequences)?
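One common answer is crypto-erasure (sometimes called crypto-shredding): store each user's data encrypted under a per-user key, and honor a deletion request by destroying the key, so whatever bytes linger on wear-leveled flash are unrecoverable ciphertext. A minimal sketch of the idea in Python, using an HMAC-SHA256 keystream as a toy stand-in for a real cipher (a production system would use something like AES-GCM, and the key store itself would need secure erasure):

```python
import hmac
import hashlib
import secrets

def keystream_xor(key: bytes, data: bytes) -> bytes:
    # Toy stream cipher: XOR data with an HMAC-SHA256 keystream.
    # Illustration only; real systems use vetted ciphers like AES-GCM.
    out = bytearray()
    counter = 0
    while len(out) < len(data):
        block = hmac.new(key, counter.to_bytes(8, "big"), hashlib.sha256).digest()
        out.extend(block)
        counter += 1
    return bytes(a ^ b for a, b in zip(data, out))

# Each user's records are stored encrypted under their own key.
user_keys = {"alice": secrets.token_bytes(32)}
stored = keystream_xor(user_keys["alice"], b"alice's purchase history")

# A deletion request is honored by destroying the key, not by
# scrubbing the flash cells the ciphertext happens to occupy.
del user_keys["alice"]
# 'stored' may survive on wear-leveled media, but without the key
# it is indistinguishable from random noise.
```

This sidesteps the wear-leveling problem because only the (tiny, easier to manage) key material ever needs physical destruction, not every copy of the data.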
A little hypocritical when it comes from various government organizations all over the western world. Surveillance companies are essential for police to be able to easily gather data when needed fast. It is a happy accident that surveillance is so lucrative for advertising and also so effective for policing.
Different parts of government might disagree on the best course of action but I wouldn’t call that disagreement hypocrisy per se.
It’s also not true that it’s an irresolvable conflict. Yes the cops can and do buy your phone location data, but even if we said that was fine and should continue, that doesn’t also mean that any schmuck should be able to buy real-time Supreme Court justice location data from a broker.
See also: "How advertisers became NSA's best friend"[1].
[1]https://www.theverge.com/2013/12/12/5204196/how-advertisers-...
I was only commenting on the publication date. HN tends to give weight to that.
I'm not making any other implication.
Assuming it’s on a computer (big assumption for kids), you can install this[0] extension and customize it to do things like remove Shorts from appearing, disable autoplay, hide recommended videos, etc. It’s a good way to keep YouTube from pulling your focus away.
from the report: "While the Order did not explicitly request that the Companies report all the types of Personal Information collected.." Why wouldn't they ask for all the personal information that they collect? Can anyone explain this?
We really need e2ee social media that's designed to protect, not addict people.
Sure it is: https://peergos.org/posts/decentralized-social-media
It is social media where only the end users' devices can decrypt the posts and comments. Then surveillance is not possible. Targeted ads are not possible.
They are pretending to take this topic seriously because there is an election coming up. Your government is in bed with big tech, no one is coming to save you, everyone is on their own, expect no quarter.
Facebook Employees Explain Struggling To Care About Company's Unethical Practices When Gig So Cushy https://www.youtube.com/watch?v=-DiBc1vkTig
I love the cognitive dissonance on display within the federal government.
One arm: "everyone is a criminal; spy on everyone"
Other arm: "hey you shouldn't really harvest all of that data"
The cognitive dissonance is in the voters and users.
Even right here on HN, where most people understand the issue, you'll see conversations and arguments in favor of letting companies vacuum up as much data and user info as they want (without consent or opt-in), while also saying it should be illegal for the government to collect the same data without a warrant.
In practice, the corporations and government have found the best of both worlds: https://www.wired.com/story/fbi-purchase-location-data-wray-... Profit for the corporation, legal user data for the government.
HN is filled with folks that wrote the code in question, or want to create similar products. And they hate to have it pointed out that these tools may cause harm so they thrash around and make excuses and point fingers. A tale as old as this site.
I often have to remind myself who hosts this board and that I am hanging out on a site for successful and aspiring techno-robber-barons.
>The cognitive dissonance is in the voters and users.
People really need to learn to say “NO” even if that means an inconvenience “Your personal information might be shared with our business partners for metrics and a customer tailored experience” no thanks, “what is your phone number? so I can give you 10% discount” no thanks, “cash or credit?” Cash, thanks, “login with google/ apple/ blood sample” no thanks
Anti-disclaimer: I'm not one of those folks.
However, that's not at all a cognitive dissonance. Fundamentally, there's a difference between governments and private companies, and it is fairly basic to have different rules for them. The government cannot impinge on free speech, but almost all companies do. The government cannot restrict religion, but to some extent, companies can. Etc.
Of course, in this case, it's understandable to argue that neither side should have that much data without consent. But it's also totally understandable to allow only the private company to do so.
There is fundamentally a difference between corporations and the government, but it's still a cognitive dissonance. These aren't the laws of physics - we chose to have different rules for the government and corporations in this case.
There are plenty of cases where the same rules apply to both the government and corporations.
There isn’t a single intellectually honest harm associated with the majority of app telemetry and for almost all ad data collection. Like go ahead and name one.
Once you say some vague demographic and bodily autonomy stuff: you know, if you’re going to invoke “voters,” I’ve got bad news for you. Some kinds of hate are popular. So you can’t pick and choose what popular stuff is good or what popular stuff is bad. It has to be by some objective criteria.
Anyway, I disagree with your assessment of the popular position anyway. I don’t think there is really that much cognitive dissonance among voters at all. People are sort of right to not care. The FTC’s position is really unpopular, when framed in the intellectually honest way as it is in the EU, “here is the price of the web service if you opt out of ads and targeting.”
You also have to decide if ad prices should go up or down, and think deeply: do you want a world where ad inventory is expensive? It is an escape valve for very powerful networks. Your favorite political causes like reducing fossil fuel use and bodily autonomy benefit from paid traffic all the same as selling junk. The young beloved members of Congress innovate in paid Meta campaign traffic. And maybe you run a startup or work for one, and you want to compete against the vast portfolio of products the network owners now sell. There’s a little bit of a chance with paid traffic but none if you expect to play by organic content creation rules: it’s the same thing, but you are transferring money via meaningless labor of making viral content instead of focusing on your cause or business. And anyway, TikTok could always choose to not show your video for any reason.
The intellectual framework against ad telemetry is really, really weak. The FTC saying it doesn’t change that.
> There isn’t a single intellectually honest harm associated with the majority of app telemetry and for almost all ad data collection. Like go ahead and name one.
You’ve already signaled that you’re ready and willing to dismiss any of the many obvious reasons why this is bad. But let’s flip it. What intellectually honest reason do you have for why it would be wrong if I’m watching you while you sleep? If I inventory your house while you’re away, and sell this information to the highest bidder? No bad intentions of course on my part, these things are just my harmless hobby and how I put bread on the table.
In my experience, literally everyone who argues that we don’t really need privacy, or that concerns about it are paranoid, or that there’s no “real” threat... well, those people still want their own privacy, they just don’t respect anyone else’s.
More to the point though, no one needs to give you an “intellectually honest” reason that they don’t want to be spied on, and they don’t need to demonstrate bad intentions or realistic capabilities of the adversary, etc. If someone threatens to shoot you, charges won’t be dropped because the person doesn’t have a gun. The threat is extremely problematic and damaging in itself, regardless of how we rank that persons ability to follow through with their stated intent.
> There isn’t a single intellectually honest harm associated with the majority of app telemetry and for almost all ad data collection. Like go ahead and name one.
The harm is the privacy violation. App telemetry needs to be "opt-in", and people should know who can see the data and how it's being used.
The intelligence agencies literally use ad data to do "targeted killing" what are you even talking about?
Ex-NSA Chief: 'We Kill People Based on Metadata'...
Not everyone but almost... and it's the same in other places (was already the case in Buenos Aires when I went there a few years ago). And of course when you tell people that there are better alternatives, many of them don't want "another app"... (but then they install one full of trackers to hope get some kind of prize at the local supermarket).
It isn’t cognitive dissonance, the state does lots of things we’re not supposed to do. Like we’re not supposed to kill people, but they have whole departments built around the task.
Should the state do surveillance? Maybe some? Probably less? But the hypocrisy isn’t the problem, the overreach is.
The FTC is under the president's authority. This is election pandering, same as Zuckerberg's backpedaling on government censorship.
This is for getting votes from the undecided.
Everything will be back to normal (surveillance, data collection and censorship) after the election.
I don't know if you've been watching but the FTC has actually been extremely proactive during this cycle. Lina Khan is an excellent steward and has pushed for a lot of policy improvements that have been sorely needed - including the ban (currently suspended by a few judges) on non-competes.
It is disingenuous to accuse the FTC of election pandering when they've been doing stuff like this for the past four years consistently.
Begs the question of agency authority, which is manifestly not resolved. You will find that the elections’ results will affect the eventual resolution of the question of the unitary executive quite dramatically.
The problem seems deeply fundamental to what it means to be a human.
On one hand, there's a lack of clear leadership, unifying the societal approach, on top of inherently different value systems held by those individuals.
It seems like increasingly, it's up to technologists, like ones who author our anti-surveillance tools, to create a free way forward.
this view presupposes the state as “just another actor” as opposed to a privileged one that can take actions that private actors can’t
In the matter of corporations vs governments, if you tally up the number of people shot, it's clear which of the two is more dangerous. You would think Europe of all regions would be quick to recognize this.
I don't like corporations spying on me, but it doesn't scare me nearly as much as the government doing it. In fact the principal risk from corporations keeping databases is giving the government something to snatch.
It seems entirely reasonable/consistent that we would allow some capabilities among publicly sanctioned, democratically legitimate actors while prohibiting private actors from doing the same.
In fact, many such things fall into that category.
"According to one estimate, some Teens may see as many as 1,260 ads per day. Children and Teens may be lured through these ads into making purchases or handing over personal information and other data via dark patterns"
There is a long trail of blood behind google and facebook, amazon... Etc...
Even with ad blockers, we still see tons of ads. Corporate news like CNN constantly has front page stories that are just paid promotion for some product or service wrapped in a thin veil of pseudo-journalism. Product placement is everywhere too. Tons of reddit front page content is bot-upvoted content that is actually just a marketing campaign disguised as some TIL or meme or sappy story.
Blood??? Some kid spending their allowance on a shitty phone game does not make them bleed.
There is no single "the government".
Instead "The Government" is like a huge community. They are all supposed to adhere to the same code, but like any community there are members who look for ways to skirt the law without quite breaking it.
That's what said purchases are. And even parts of the community in the same branch of a government department, may do what other parts are not even really aware of. Or agree with.
Although you have a valid point, I object to your calling it a community because communities don't have constitutions and cannot throw people in jail if they break the community's rules. Also, a community has much less control over who becomes a member of the community than a government has over who it employs.
> People criticize the clunky attempts by the EU to rein this in, and yes I agree the execution leaves much to be desired. It's still vastly better than the complete laissez-faire approach of the US authorities.
This is kind of weird as a response to a report by a US regulatory agency that is making specific policy requests for legislation to address this.
Apologies I was unclear: I'm not criticizing this report, I'm criticizing the lack of action over the past decade or so.
Simple questions:
Should ad prices be lower or higher?
Should YouTube be free for everyone, or should it cost money?
Having ads does not require mass surveillance --- that's really just something that social media companies have normalized because that's the particular business model and practices they have adopted and which makes them the most amount of money possible.
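Contextual targeting is the obvious counterexample: pick the ad from the page content alone, with no user profile involved. A toy sketch (the ad inventory and keywords are made up for illustration):

```python
# Contextual ad selection: match ads to the page being viewed,
# not to a profile of the viewer. Nothing about the user is
# collected, stored, or needed.
ADS = {
    "hiking boots": {"trail", "hike", "outdoor", "mountain"},
    "code editor": {"python", "programming", "debug", "software"},
    "espresso machine": {"coffee", "espresso", "brew", "roast"},
}

def pick_ad(page_text: str) -> str:
    words = set(page_text.lower().split())
    # Score each ad by keyword overlap with the page content only.
    scores = {ad: len(keywords & words) for ad, keywords in ADS.items()}
    return max(scores, key=scores.get)

print(pick_ad("a beginner guide to python programming and debug workflows"))
# → code editor
```

This is roughly how early web advertising worked (match the ad to the page), and it requires none of the cross-site tracking infrastructure that behavioral targeting does.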
Well put. Targeting and more specifically retargeting is the problem.
Most companies can't afford to not do this when their competitors are. Hence the need for regulation.
Those are useful questions but I don’t think they’re the only ones that matter. Here’s another one for consideration:
What is the minimum level of privacy that a person should be entitled to, no matter their economic status?
If we just let the free market decide these questions for us, the results won’t be great. There are a lot of things which shouldn’t be for sale.
> What is the minimum level of privacy that a person should be entitled to, no matter their economic status?
This is an interesting question: maybe the truth is, very little.
I don't think that user-identified app telemetry is below that minimum level of privacy. Knowing what I know about ad tracking in Facebook before Apple removed app identifiers, I don't think any of that was below the minimum level.
This is a complex question for sort of historical reasons, like how privacy is meant to be a limit on government power as opposed to something like, what would be the impact if this piece of data were more widely known about me? We're talking about the latter but I think people feel very strongly about the former.
Anyway, I answered your questions. It's interesting that no one really wants to engage with the basic premise, do you want these services to be free or no? Is it easy to conceive that people never choose the paid version of the service? What proof do you need that normal people (1) understand the distinction between privacy as a barrier to government enforcement versus privacy as a notion of sensitive personal data (2) will almost always view themselves as safe from the government, probably rightly so, so they will almost always choose the free+ads version of any service, and just like they have been coming out ahead for the last 30 years, they are likely to keep coming out ahead, in this country?
The issue to me is that these companies have operated and continue to operate by obfuscating the nature of their surveillance to users. This isn’t a system of informed consent to surveillance in exchange for free services; it’s a system of duping ordinary people into giving up sensitive personal information by drawing them in with a free service. I’m almost certain this model could still exist without the surveillance. They could still run ads; the ads would be less targeted.
I didn’t mean to evade your questions, but my opinion is as follows:
Yes I want YouTube to be free, but not if that requires intrusive surveillance.
People who pay for YouTube aren’t opted out of surveillance as far as I can tell. So I reject the premise of your question, that people are choosing free because they don’t value privacy. They haven’t been given the choice in most cases.
On a tangential note, you previously asked if ads should be more expensive. It’s possible that ads should be less expensive, since they may be less effective than ad spend would suggest: https://freakonomics.com/podcast/does-advertising-actually-w...
To me, what's missing from that set of recommendations is some method to increase the liability of companies who mishandle user data.
It is insane to me that I can be notified via physical mail of months old data breaches, some of which contained my Social Security number, and that my only recourse is to set credit freezes from multiple credit bureaus.