
FB - Facebook


biaggio


People hate Facebook, but I think ultimately people hate people.

 

I get junk mail from companies that I never contacted.

 

I got robocalls from companies that I never contacted. They spoof a number that is close to my own cell phone number (same area code, same three-digit prefix) despite calling from an out-of-state number on the left coast, yet they want to sell me a solar roof for my east coast house.

 

Google Maps knows where I park my car. I am pretty sure they would tell the police, if it comes down to that. They probably tell advertisers too.

 

By the way, I hate you guys  ;D


Guest Schwab711

I don't understand the animosity. If a person doesn't like the FB product, they are free to not use it. Unlike with the credit rating agencies, I don't have the option to opt out. In fact, I am forced to pay to freeze my credit in order to help ensure my information doesn't get out. Why was there not more outrage when Equifax tried to cover up that it lost personal information?

 

 

That's not true. I do not have a Facebook account, but they know who I am, who my friends are, what my phone number is, my email, etc. If any person has me as a contact and shares their contacts with Facebook to "find people they know", Facebook hoovers up my data and builds a profile on me. Where can I opt out of that?

 

http://theconversation.com/shadow-profiles-facebook-knows-about-you-even-if-youre-not-on-facebook-94804

 

If any person has you as a contact, they can disclose this information to the whole world in any way they like and you cannot do anything about it. They can share it with any site or service that asks for it. They can post it on their blog, webpage, etc. They can print 100,000 business cards and drop them in a NYC metro station at peak hour.

 

Well, maybe if you're famous you can sue. Good luck.

 

This is not a Facebook issue. It's the issue that information you gave to someone else is not controllable.

 

Didn't people always have this problem in some form? Addresses and phone numbers have been published for at least 100 years, and probably further back for addresses and business directories. In the past, if you looked up a friend on a specific street, you would have the address and phone number of all of their neighbors. FB does it by friends instead of neighbors (or maybe 'in addition to').


I don't understand the animosity. If a person doesn't like the FB product, they are free to not use it. Unlike with the credit rating agencies, I don't have the option to opt out. In fact, I am forced to pay to freeze my credit in order to help ensure my information doesn't get out. Why was there not more outrage when Equifax tried to cover up that it lost personal information?

 

 

That's not true. I do not have a Facebook account, but they know who I am, who my friends are, what my phone number is, my email, etc. If any person has me as a contact and shares their contacts with Facebook to "find people they know", Facebook hoovers up my data and builds a profile on me. Where can I opt out of that?

 

http://theconversation.com/shadow-profiles-facebook-knows-about-you-even-if-youre-not-on-facebook-94804

 

Just out of curiosity, do you get e-mails directly from Facebook trying to get you to join? Do you get ads served to you by Facebook? How is Facebook profiting off knowing your e-mail address and your contacts?

 

I'm not trying to say that it's right, but I still don't think this is in the same ballpark as the credit rating agencies selling personal information.

 

If you have a land line phone, chances are you're in a phone book.  If you're on the grid you're in "the system".


The difference is what FB holds itself out to be & what it is today. The credit agencies & Amazon are focused on your ability & willingness to do commerce, but FB claims to be more than this: a platform to connect people, which in practice is a surveillance machine. All the previous platforms to connect people (the post office & the telcos) did not take the data you generated & shared with others & sell it to the highest bidder. The gov't has carefully regulated if & how these platforms could use your personal data; in most cases it was illegal to place a wiretap (collecting your phone data without your consent, except by court order). Now FB comes along and claims all your data is theirs to sell to whoever they want. Most folks in the US are fine with the collection & selling of data associated with commerce to firms who can solicit you. It is pretty creepy but still legal within our system. Where the issue comes in is for data that does not have a commercial purpose but more of a political or relational purpose. There is no easy way to distinguish between these types of data in the surveillance machine FB has built, thus the issue. I really see no way out of FB's issue here short of either breaking up FB or the government regulating them based upon an RoI approach. I see the break-up as the better path for shareholders, as the latter would destroy quite a bit of FB's value.

 

Packer


The notion that FB is "selling your personal data" is just not correct. There are leaks, there are poorly crafted APIs, there may be internal APIs being accessed by bad actors, and there may be limited deals that allow Spotify to see the messages you draft, send, and read in Spotify.app, but none of this is the wholesale selling of data--something which, beyond being against Facebook's PR interest, would simply be against their mechanical business interest.

 

So long as Facebook has the social graph (and nobody else does), Facebook captures the lion's share of everyone's online social activity. So long as they do that, they will probably crank out the best ARPUs in the industry. What interest do they have in revealing/commodifying/open-sourcing any substantial component of the social graph? In this case it is exactly their monopoly characteristic that, in my head, makes them "safe" here. As a monopoly, they fully internalize the economic benefits of the social graph, so they have no reason to trade off the annuity stream.

 

This is my problem with the NYT article--it doubles down on and perpetuates the myth/simplification of what's going on with Facebook. Packer is right that with enough press like this, the government will come in and start passing many more lines of regulation.

 

The only thing is, as long as the dominant harm-narrative about Facebook is so grossly oversimplified (selling data!), it should be manageable for Facebook to capture the regulatory process, and steer it to an outcome that takes a sufficiently hard line on the most politically salient (and actually irrelevant) data issues, while still constructing an enormous compliance burden that any new would-be social media app will have to face.


Johnny, let's say that you're right. Why are there poorly crafted APIs? Why can bad actors access internal APIs? Don't these guys have the brightest and the best programmers? So then why all this shoddy work?

 

The basic answer is that Facebook doesn't care. The best case scenario is gross negligence. This is actually a great case for regulation. The industry is not doing what needs to be done. So then you make them do it.


Just wanted to quickly add a note about my general proposition that we're approaching a sort of delusional mass hysteria on the tech/privacy topic. I'm listening to the Commentary podcast, which is from a politically conservative, Jewish publication. These guys spend most of their time being very dismissive of almost any media outrage over any topic (caravans, embassies, Russia stuff, etc.).

 

The editor just relayed an anecdote from a friend of his that goes essentially as follows: she was at the grocery store with her iPhone in her purse, and had a conversation with somebody in the checkout line about Product X. The very next day, on Amazon, she saw ads for Product X. She then hypothesized to the editor, who repeats it credulously on the podcast, that Siri (Apple) must have overheard her conversation, relayed actionable information to Amazon, and that Amazon served the ad.

 

He didn't tell the story in order to TEST the plausibility of it. He simply asserted it, said "this is horrific", and then another member of the podcast (without citation) reaffirmed it, saying something to the effect of, "oh yeah, there was some guy who actually tested this and proved that it happens".

 

This is 80-IQ-level discourse on this topic, among people who consider themselves to be politically savvy, strategy-aware people.

 

https://overcast.fm/+GASFTQVww/28:17

 

Just listen for about two minutes to appreciate how there is zero skepticism about the most conspiratorial (and again, most engineering-oblivious) explanations of completely benign everyday experiences.


Guest Schwab711

GOOG sends employees to drive around the country to collect your WiFi info and photograph your house/car/property from various angles. There's no opt-out unless you are the government or important. GOOG scans every communication you send through their servers (or that others send you through their servers) without the ability to opt out. If you are sent an unsolicited email from a Gmail address, you have been cataloged by GOOG. It's the exact same thing. More so, this is part of the business model of nearly every tech company. Most apps you download request access above and beyond what is needed to function. Every site you visit tracks your IP, device, and various hardware data. Many sites now require location to be turned on. If an app/site has access to your location, it may be able to continue to track your location after you leave the app/site.

 

Telecoms have always sold your data (name, phone number, address, etc.) to aggregating companies (credit bureaus, phone books, FICO, etc.).

 

The post office has and does sell your data for advertising purposes.

 

The government failed to sufficiently regulate data rights/privacy, and FB became aggressive as a result. This also goes back to cases about TOS in the early 2000s and what is a reasonable contract for people to agree to. It goes back to court cases on arbitration and whether it is fair to waive your rights to class-action remedies. This is a very complicated topic, and FB is the poster boy because, of the unique personal data owned by companies, FB's is the most granular. There are, and have been for 100+ years, wholesale data brokers that sell to FB and others a large portion of the basic information companies know about individuals. When you update your basic information with a company, the company's database becomes a little more valuable because it's slightly more accurate than the commodity data.

 

I'd argue that politics is an important type of commerce at this moment. We have court rulings that state that the transferring of money is a form of expression, no different than the clothing you wear or the words you say. This has led to an explosion in advertising dollars spent. Specific to FB, FB's data (the unique data points collected by FB, indexed with commodity/wholesale data) wins elections. It wins them overseas. It wins them in the US. Each passing election shows politicians love FB more than any other digital medium.

 

I don't think FB has any issue that 100's of other companies don't share, to Johnny's point.


Johnny, let's say that you're right. Why are there poorly crafted APIs? Why can bad actors access internal APIs? Don't these guys have the brightest and the best programmers? So then why all this shoddy work?

 

The basic answer is that Facebook doesn't care. The best case scenario is gross negligence. This is actually a great case for regulation. The industry is not doing what needs to be done. So then you make them do it.

 

Since you're talking about gross negligence, what is the distinct, observable harm you're referring to? Most of my commentary has been exclusively about the "Spotify can read your PMs" part of the NYT article. Honestly, I haven't even read the rest of the article because I've been mostly preoccupied with running down what's going on in that one paragraph. In the case of the Spotify stuff, there's no tangible harm. Users opted in to allow Spotify to access their friends list so they could message their friends from Spotify. I suspect that once we start seeing the "reasonable person" or "reasonable expectation" standard refined a bit on these topics, we're going to land somewhere like "a reasonable person would expect that if they give a message to Spotify to deliver to their friend, then Spotify necessarily will have that message". That's just a hunch though; it could go somewhere else.

 

Again, I think I make the case that they do care, but that they're rather incompetent. They care because it is in their interest to manage these risks, or at least extract substantial value for taking them (they've done neither). I don't think the "best and brightest" stuff is anything but rhetorical. They are an enormous operation that hires thousands of new engineers a year. They hire the best and brightest they can find, which isn't necessarily saying much. In any case, it's a fallacy to assume that because your organization is built of individuals of enormous capability, the organization or sub-corporate entities will necessarily share those traits. Programming is hard, coders take shortcuts, and sometimes there are bugs or poorly documented features in a codebase you're working with. There are a billion reasons you can have data leaks that don't ultimately boil down to something like "They Just Don't Care". A Boeing just dropped out of the sky recently. Is this because the dipshit aeronautical engineers "didn't care" about safety? An F-18 just slammed into a refueling tanker and killed half the people on both; who was guilty of not caring?

 

If we end up seeing very serious data leakage (as opposed to egregiously misreported partnerships à la Spotify), you're right that it will be used as a case for regulation. Again, my expectation is that this sort of problem is going to produce regulation that is probably neutral to the investment case, or perhaps slightly positive.


Quite a bit of the FB data leakage to partners is from the 2011-2017 timeframe. I'd say that yeah, they did not care much about it then. And it's quite possible that some of the access remained open and unused after 2011. Some maybe even after 2017.

 

If I had to bet, I'd bet that some set of FB engineers just got their holidays f&^%ed and are sitting in the cold cubes closing the access that was still open.

 

The editor just relayed an anecdote from a friend of his that goes essentially as follows: she was at the grocery store with her iPhone in her purse, and had a conversation with somebody in the checkout line about Product X. The very next day, on Amazon, she saw ads for Product X. She then hypothesized to the editor, who repeats it credulously on the podcast, that Siri (Apple) must have overheard her conversation, relayed actionable information to Amazon, and that Amazon served the ad.

 

He didn't tell the story in order to TEST the plausibility of it. He simply asserted it, said "this is horrific", and then another member of the podcast (without citation) reaffirmed it, saying something to the effect of, "oh yeah, there was some guy who actually tested this and proved that it happens".

 

It would be just too funny if it were not sad.

 

Overall I agree with what Schwab711 is saying, so I won't spend too much airtime here.

 

 


I want to rephrase what I said. I think the "cares/doesn't care" dichotomy is insufficiently explanatory. By far, the impression I get when talking/thinking this out with people closer to the ground is that the sense in engineering communities is essentially that privacy stopped actually existing like ten years ago--in other words, the entire game is already checkmated--so they're constantly surprised at the backlash received over incremental steps that, from their perspective, in no way change the endgame. This does in fact read as "not caring", but I think it's closer to something like autism. They do care about what people think, but there is a legitimate obliviousness to how "regular people" think about these things, how likely they are to misunderstand half of what's going on, and how much they mistrust the rest.

 

I'm obviously guilty of that too. I have listened to that Commentary clip maybe 6 times in the past two hours, in total disbelief. This is, in cognitive terms, a clear empathy deficit.

 

I think the strength of the company is sufficient to see them through this, and I think they're going to be a lot savvier about how they navigate these waters. And again, most importantly, they're in a totally different competitive position now. They don't really need to push the envelope on anything. The hard task of building out the network is pretty much done, it's rent-collection time.


I think you pretty much arrived at the point. I mean, you had these guys in Silicon Valley - the engineering communities - who decided that privacy is dead. They did that because they saw an opportunity to make money and/or play with the cool toys. The reason they are surprised at the backlash is that "regular people" didn't empower a bunch of nerds to make the decision that privacy is dead. So in the view of the regular people, the engineering communities had no right to make that determination.

 

The people in that clip may be uninformed about how these companies really work. But what you can tell is their disapproval of the practices some companies use, and their fear and apprehension about how the information could be used in the future. The naked truth is that these companies have been obfuscating like crazy about how they use the personal data in their possession - in some cases shamelessly. Facebook was about making the world better. Turns out their definition of a better world is a world where Facebook has as much of your personal information as possible to share around. I have yet to meet a regular person who defines a better world in any way close to that.

 

Now the technical community may say that this is a fait accompli and the regular people just have to take it. But the reality is anything but. The regular people can come together and tell the nerds to shove it. That's the real risk. It's not the cost of implementing regulations, i.e. the compliance cost - Facebook could easily afford that, whatever it may be. The real risk is that because of egregious past behavior (in my opinion there are more shoes to drop), regulations will significantly diminish the value of the information that Facebook possesses, and thus its earning potential. Regular people won't give a flyer if Facebook doesn't get to make as much money as it otherwise would.


It very much depends on whether "regular people" will actually care or do anything. rb's narrative is as one-sided as the "nerds" narrative. So far the majority of "regular people" don't give a crap. And the majority of young "regular people" either don't give a crap or think privacy is dead. It is possible that the mainstream-press article shitstorm will change the minds of "regular people", but it's not a given. It's also possible that the EU (and, much less likely, the US) may go for heavy regulation even if "regular people" don't give a crap. We'll see.

 

Disclosure: I own positions in FB, GOOGL, Tencent, and a bunch of other evil nerds.


The regular people gave up privacy for convenience a long time ago.

 

Regular people are always retardedly reactionary. They don't care about privacy when it's convenient. Three years later when it becomes a hot topic, they are outraged. Not good for FB.

 

Disclosure: I own Tencent and GOOG.


Johnny, let's say that you're right. Why are there poorly crafted APIs? Why can bad actors access internal APIs? Don't these guys have the brightest and the best programmers? So then why all this shoddy work?

 

The basic answer is that Facebook doesn't care. The best case scenario is gross negligence. This is actually a great case for regulation. The industry is not doing what needs to be done. So then you make them do it.

 

A highly regulated framework for data privacy and/or content censorship is likely to cement Facebook as a monopoly forever.  Only incumbents can afford to hire armies of content police and security experts.


  • 2 weeks later...

On the "privacy regulations will probably help Facebook" thesis, this is an interesting article:

 

https://motherboard.vice.com/en_us/article/nepxbz/i-gave-a-bounty-hunter-300-dollars-located-phone-microbilt-zumigo-tmobile

 

If you think about the core "problems" these businesses are trying to solve with that data, it's pretty easy to imagine a Facebook-provided solution that is actually far more resilient to abuse, and an excited government trying to show it's doing a lot to get a handle on the risks is likely to just push the market towards monopoly.

 

I'm thinking specifically of, say, the credit card fraud detection case. It would be pretty trivially easy for FB to basically take in two arguments (person, location of card swipe) and just return a value of how sketchy FB thinks it is. Yea yea, creepy as fuck. But a monopoly doing Creepy Level 8 stuff is probably going to win against a fragmented market at Creepy Level 10.
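Purely as a toy sketch of the shape such a service could take (Facebook exposes no such API, and every name and threshold below is invented), the call might look something like this:

    # Hypothetical sketch only -- Facebook exposes no such API; names and thresholds are invented.
    # The idea: a black-box "how sketchy is this swipe" score keyed off location signals FB already has.
    from math import radians, sin, cos, asin, sqrt

    def haversine_km(lat1, lon1, lat2, lon2):
        # Great-circle distance between two (lat, lon) points, in kilometres.
        lat1, lon1, lat2, lon2 = map(radians, (lat1, lon1, lat2, lon2))
        a = sin((lat2 - lat1) / 2) ** 2 + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2
        return 2 * 6371.0 * asin(sqrt(a))

    def sketchiness_score(last_known, swipe_location):
        # Both arguments are (lat, lon) pairs. Returns 0.0 (plausible) .. 1.0 (very sketchy).
        # Toy heuristic: a swipe more than ~500 km from where the person's devices last
        # placed them is treated as maximally suspicious.
        d = haversine_km(*last_known, *swipe_location)
        return min(d / 500.0, 1.0)

    # Devices last placed the person near Boston; the card was just swiped in Los Angeles.
    print(sketchiness_score((42.36, -71.06), (34.05, -118.24)))  # -> 1.0

The point isn't the heuristic; it's that the card issuer only ever sees a single opaque score, which is exactly why a monopoly-provided version of this would leak less raw location data than today's fragmented market of data brokers.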


  • 2 weeks later...

https://www.revealnews.org/blog/a-judge-unsealed-a-trove-of-internal-facebook-documents-following-our-legal-action/

 

A glimpse into the soon-to-be-released records shows Facebook’s own employees worried they were bamboozling children who racked up hundreds, and sometimes even thousands, of dollars in game charges. And the company failed to provide an effective way for unsuspecting parents to dispute the massive charges, according to internal Facebook records.

 

In addition, there was this tidbit:

 

The judge agreed with Facebook’s request to keep some of the records sealed, saying certain records contained information that would cause the social media giant harm, outweighing the public benefit.

 

What was kept sealed, you ask? Well, we obviously don't know for sure, but folks seem to think the following two articles flesh it out:

 

https://www.nytimes.com/2014/06/30/technology/facebook-tinkers-with-users-emotions-in-news-feed-experiment-stirring-outcry.html

 

But last week, Facebook revealed that it had manipulated the news feeds of over half a million randomly selected users to change the number of positive and negative posts they saw. It was part of a psychological study to examine how emotions can be spread on social media.

 

https://www.theguardian.com/technology/2017/may/01/facebook-advertising-data-insecure-teens

 

Facebook showed advertisers how it has the capacity to identify when teenagers feel "insecure", "worthless" and "need a confidence boost", according to a leaked document based on research quietly conducted by the social network.

 

 


You guys have probably seen references to some research suggesting that there's a figure like $1,000 that represents either the midpoint, the high point, or the low point of what the average (US) Facebook user would need to be paid in order to give up FB for a year. Since I'd seen coverage of the article with all of those mutually exclusive interpretations, I thought I'd take a look at the source.

 

It's a reasonably clever study (a set of studies, really) that tries to tease out and cross-check the data from each sub-study (none of which, independently, could really be relied on to make a case for anything).

 

The basic idea is that most of the representations of the study in the media are, I think, actually understating the case that the study, in total, makes for the value of Facebook to users.

 

A few notes:

 

1. The basic study design was asking users to "bid" an amount at which they'd be willing to cease using FB. Some of these are bona fide transactions that involved cash transfers and some evidence of deactivation. In general the deactivation evidence was very weak, which we should expect resulted in lower-than-true bids.

 

2. Substantial chunks of study participants essentially got eliminated from the study by not cooperating at this stage: in other words, they either wouldn't provide a bid, or they'd provide a bid that was going to mess up the distribution too much (the distribution is already totally fucked, and that's with them removing a few bids that were "over $50,000"). In the case of the college-student population in study 2, this suggests that roughly a third of the study participants were unwilling to seriously entertain going without Facebook for a year.

 

3. Sticking with that population (Study 2, college students), if you ignore the non-cooperating students, the mean bid was actually over $2,000 for a year of Facebook abstinence. The data here is -really- weird though: the average is over $2,000, but the median is only $200. A couple of people bidding $50,000 really mess things up here (a toy illustration of this mean-versus-median skew appears after this list).

 

4. It's basically not possible to come up with a respectable estimate of the 75th-percentile number in this population, since, as mentioned above, the protest/no voters made up over a quarter of the population. But once you pack them somewhere into the right side of the distribution, the adjusted median is $600 and the bottom-quartile marker is $100.

 

5. This is an attempt to value the primary Facebook product, Facebook. There exists the intriguing possibility that the relatively strong popularity of Instagram with college-aged kids actually makes them much more comfortable low-bidding against Facebook, since the explicit terms of the bidding did not mandate Instagram deactivation. This is suggested by the initially paradoxical result they observe where users that post relatively large quantities of photographs to Facebook actually seemed to value the service a bit less.
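To put rough numbers on points 3 and 4, here's a toy illustration with invented bids (not the study's actual data), showing how a couple of huge bids drag the mean far above the median, and how "packing" the refusals into the right tail moves the quantiles:

    # Toy illustration with invented bids -- not the study's data.
    from statistics import mean, median

    # Mostly modest bids, plus a couple of $50,000 outliers like those described above.
    bids = [100, 150, 200, 200, 250, 300, 400, 600, 1200, 50000, 50000]
    print(mean(bids))    # ~9400: a few huge bids drag the mean far above the typical bid
    print(median(bids))  # 300: the median barely notices them

    # "Packing" non-cooperating participants into the right tail: treat each refusal as a
    # bid at least as large as the largest observed one, then re-read the quantiles.
    refusals = 5
    packed = sorted(bids + [max(bids)] * refusals)
    print(median(packed))            # the median shifts right once refusals count as high bids
    print(packed[len(packed) // 4])  # a rough bottom-quartile marker

The exact numbers are meaningless; the shape of the problem (a long right tail plus a large block of refusals) is what makes any single "the average user values Facebook at $X" headline so easy to get wrong.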

 

The paper: https://journals.plos.org/plosone/article/file?id=10.1371/journal.pone.0207101&type=printable

Raw data: https://dataverse.harvard.edu/dataset.xhtml?persistentId=doi:10.7910/DVN/YFALGA

