2/09/2010

Some Minor Kvetching, and More on the Facebook Automation Labs Hoax

Okay, so I just had my first American National Government class. I'm not so sure that went well for me. It's a subject I'm interested in, I'm very familiar with the material, and the teacher is personable. However, I have serious Gordon rule problems (this is why this is on topic) - the only time I pick up a pen is to sign my name. There is a good reason for this. My handwriting looks like something that would be written on the side of a flying saucer in a "B" movie. This is not something I can readily change - my hands are agile, but very weak. I've been typing since I was about five or six years old, so yes, I do get the same muscle-brain stimulation from typing that someone writing notes by hand gets when they put pen to paper. Like I said, this is what I grew up doing.


My notes for each test in government are to be handwritten and turned in for credit toward the Gordon rule. Because my letters (even in lower case) are quite large, I take up more page real estate (even when concentrating and taking my dear, sweet time forming the letters), so I fit fewer words per page. On top of that, when writing at speed I'm still about three times slower than everybody else. I'm not worried about the tests - I'll ace those handily - I'm concerned about getting my Gordon rule credit. I stressed this multiple times. I'm not asking to use Google on my tests, but even being able to PRINT my notes would be nice; maybe I could actually read them. I'm also a touch irked at being forced to do things like everybody else when I have a documented problem in that area.


I will now cease ranting.



Let's talk MLA instead. You'd think I would be ranting here, but I think MLA has improved dramatically since I last used it in Comp I. I don't think it's there yet, but it at least looks like they're starting to treat electronically formatted documents as real documents. I applaud the removal of the URL from web citations. This is awesome - it was a cause of major formatting woes on my papers in Comp I.


Now I want to talk about what they could do to make things even better. They need to tighten up some of their citation formats - I'll use the email interview citation from number five on our MLA exercise:


An e-mail interview you conducted with Nora James on November 1, 2008.

This is how I cited it, and I believe it to be correct using the current MLA format:


James, Nora. No Subject. Email to Nora James. 1 Nov. 2008. Email.

This is how another student cited it:


Student Name Redacted -Ed., and Nora James. "Interview." Message to the author. 1 Nov. 2008. Web.

I think his method is more correct than the current MLA standard. I might even go so far as to put it: Interviewer Name (last, first). Interview with Interviewee Name. Interview Date. Email or Web. Most mail is "web mail" - very few people use an email client like I do, so I feel "Web" is a valid medium. I could see some quibble room there, though. Why does the interviewer's name come first? Because the interviewer typically controls the interview. Interviewers write the questions, ad-lib follow-up questions as needed, and comment on interviewees' responses. They typically have a point that they want to lead up to when interviewing someone; that is, they are interviewing a person for a reason.
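To make that pattern concrete, here is a rough sketch of how the proposed format could be assembled. The field names and the sample interviewer are my own invention for illustration; this is not an official MLA form, just the shape I'm proposing.

```typescript
// Sketch of the citation pattern proposed above.
// The interface, field names, and sample interviewer are hypothetical.

interface InterviewCitation {
  interviewerLast: string;
  interviewerFirst: string;
  intervieweeName: string;   // "First Last" as it should appear
  date: string;              // e.g. "1 Nov. 2008"
  medium: "Email" | "Web";
}

function formatInterviewCitation(c: InterviewCitation): string {
  return `${c.interviewerLast}, ${c.interviewerFirst}. ` +
         `Interview with ${c.intervieweeName}. ${c.date}. ${c.medium}.`;
}

// Example with a made-up interviewer name:
// "Doe, Jane. Interview with Nora James. 1 Nov. 2008. Email."
console.log(formatInterviewCitation({
  interviewerLast: "Doe",
  interviewerFirst: "Jane",
  intervieweeName: "Nora James",
  date: "1 Nov. 2008",
  medium: "Email",
}));
```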


I could see reasonable arguments for making the citation more like a cite for a message board post. While only Google is currently using "threaded email conversations" as a model, you can see how they got there even in the traditional multi-level quoting system used by most mail clients. I think Google's model is a step forward and that eventually it will become the standard - just as most other email providers stepped up the amount of mail you could store in response to Gmail. But the key word there is "threaded" - just like a message board thread and the original post/responses on it. Just like Angel's message board system. There are more similarities in the medium than differences, so this is a reasonable way to handle citations where email is the medium.


The real solution to this problem is larger in scale. MLA, APA, and the administrators of the other systems need to sit down, take their hats in their hands, and admit that they don't "get" the internet. That's okay, no shame in it. It isn't their job to figure out the internet. It is the World Wide Web Consortium's (W3C) job, and when they are done with that moment of self-confession, they should go have a friendly chat with them.


Sooner or later, this needs to happen, and I can explain why. I've lived on both sides of this fence - my Dad had a typewriter shop when I was young, I learned to type at age six, and I got good at touch typing when I was eleven. While I've been using a computer continuously since then, I used typewriters in the early days as much as I did my C64. Word processing on a computer hadn't quite arrived yet; though I had the means to do it, it was very expensive. So I learned about things like picas, which nobody in their right mind uses anymore. That's one of the things they need to discuss with the W3C - standards and measures.


On an MLA-formatted paper you need one-inch margins, double spacing, and a header with your last name and page number. None of this works all that well on the web, and yeah, we aren't doing papers here yet (PDF doesn't count), but I guarantee you that my youngest classmate's children will at the very least. Why not prepare for that day now and do it right, rather than trying to hash it out when it's needed? Specifically, the reason it doesn't work on the web comes down to "scaling" - good web pages scale. MySPC and Angel designers, please take note: this is what SPC's own web development curriculum has taught me. It is also a W3C standard. What that means is that no matter how large your browser window is, the page's proportions adjust to fit it properly. The viewing area of the Firefox window on my iMac measures 1827px wide by 968px tall - I run my browser very large. Because my MacBook's screen is smaller, the measurement in pixels (px) will be smaller there even though the window takes up the same percentage of the screen. Designers typically test pages at 1024px wide by 768px tall, and sometimes at 800px wide by 600px tall.
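For reference, this is roughly how you can read those viewport numbers yourself in any modern browser; a quick sketch, not part of any MLA or W3C specification:

```typescript
// Quick sketch: read the current viewport size in the browser.
// window.innerWidth / window.innerHeight report the visible area in
// CSS pixels, so the same "full screen" window gives different numbers
// on different displays - which is exactly why fixed, print-style
// layouts don't translate cleanly to the web.

function reportViewport(): void {
  const width = window.innerWidth;   // e.g. 1827 on my iMac's Firefox window
  const height = window.innerHeight; // e.g. 968
  console.log(`Viewport width: ${width}px`);
  console.log(`Viewport height: ${height}px`);
}

reportViewport();
```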


Things like margins and indents are possible, but things like non-breaking spaces in tables (you would likely use a table for citations, for example) can cause issues in some browsers. Cough. Internet. Cough. Explorer. Cough. Excuse me, too many cigarettes, I need to quit. Just like people need to quit using IE until Microsoft commits to making it into a real browser. Sorry, mandatory cheap shot.


Arguing about browsers aside, if the MLA and the W3C got together and discussed standards, perhaps we could find a better way to render this stuff. Using tables, divs, or non-breaking spaces is a very clumsy approach. I feel I am a pretty good designer, and even I don't know how consistent my results would be creating papers for use on the web. New elements could be added, or existing elements could be modified, to suit academic uses. This would be a great thing.


The question I haven't answered above is "why" - why should the MLA bother with all this? Why can't they keep doing business as usual and ignore the web? Because nobody can. I don't care if you're Ted Kaczynski. If you are going to communicate with people, to share information, you cannot ignore this medium. The MLA's job is essentially organizing information, so they especially cannot afford to ignore it. There are a number of benefits, too. Co-authoring and peer reviewing become a great deal easier when you have a tool like Google Docs, or perhaps Google Wave in the future. I might even revise my stance on group work (I hate it more than death) if it were done under a system like that, because you can see exactly who is contributing and who is not. Everybody wins when you can share information with more people - it's just that simple.


So let's talk about Facebook and the automation hoax. There are still some great questions left here: What's unique about this situation? Why did the message move to email as a medium? Why are so many commenters unwilling to believe this is a hoax?


Let's look at the current status of the posting first. Some stats: 1,280 likes (I put a like on it for ease of tracking, so it displays as 1,281) and 395 comments since Wednesday. I'm seeing comments in multiple languages; the hoax has definitely spread in French, and some are citing that as the origin of the hoax. I am not seeing any direct evidence of this, though French-speaking users do seem active in propagating it. Greek and Turkish speakers are also propagating it. The likes seem about the same as when I last viewed it late Friday night, and there are about 45 additional comments.


Here we need to discuss the concept of "opt in," which is kind of complicated. Let's watch a video that discusses how opt in impacts communications, and I'll explain how it relates. The relevant part of the video is about five minutes in, but I'd encourage you to view the entire program. It's a great presentation.



So what does this have to do with viral communications (email forwards, Facebook statuses, retweets)? All viral communications are an opt-in process with very few options: forward that email, forward it with an addendum, or don't forward it at all. So in economic terms, we can say each message has a "cost," though it may not be an easily measurable cost in financial terms. In viral communications this is a sliding scale, from the high end - things like the Craig Shergold emails that ask the recipient to take action in the real world - down to Facebook statuses that require a very small investment of time or effort.


As Dr. Ariely points out, cost is not the only factor, nor is it the primary factor in a user's decision to believe in the Automation Labs hoax. In my opinion, several other factors converged to make it possible: 1) Users unfamiliar with the way Facebook operates. Arthur C. Clarke said, "Any sufficiently advanced technology is indistinguishable from magic." That is the average user's view of how the internet works - ask a non-technical person how he thinks his computer connects to the internet sometime; I guarantee you the results will be fascinating. In this case our user failed to understand not only how suggestive search operates, but also how Facebook accounts work, even though they use both daily. 2) Facebook's response.


Facebook's response is notable, as this is a somewhat unusual situation. Facebook is a very closed system, in that all the content is focused on their site, even if you're bringing in content from a site that participates in Facebook Connect. As such, they can moderate content when they choose to, and they have a mechanism in place to do just that.


Additionally, there are other parts of Facebook security that are partially crowd-sourced. If enough people report a status, comment, or link as breaking the terms of service, someone will investigate it. As you might imagine, these are very busy people.
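I obviously have no knowledge of Facebook's internals, but a crude sketch of the kind of threshold-and-review queue I'm describing might look like this; the threshold and names are entirely my own invention:

```typescript
// Crude sketch of crowd-sourced moderation as I imagine it: once an
// item collects enough reports, it lands in a queue for a human to
// review. The threshold and structure are purely hypothetical.

const REPORT_THRESHOLD = 25; // made-up number

interface ReportedItem {
  id: string;
  kind: "status" | "comment" | "link";
  reportCount: number;
  queuedForReview: boolean;
}

function recordReport(item: ReportedItem): void {
  item.reportCount += 1;
  if (!item.queuedForReview && item.reportCount >= REPORT_THRESHOLD) {
    item.queuedForReview = true; // a very busy human takes it from here
  }
}
```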


Compare this to 1989. In 1989, chain letters were typically sent by email; occasionally you'd see one on USENET. Both of these are radically decentralized. While it was possible to report a sender to his ISP, it was never a given that anything would really be done about it. At the core, there really wasn't anyone in charge, and in most cases there was no moderator. America Online was a new service at the time, with an internal component you could compare to Facebook as well as a gateway to the internet. Chain letters inside their system, or sent through the gateway to the internet email system, were not typically pursued with the same zeal as other termination-of-service offenses.


Because of this closed system, where all roads lead to Facebook, they were in a unique position to try to cut the chain. While I'm not specifically versed in the inner workings of Facebook, I can make an educated guess at how they did this from a technical perspective. They wrote a script which looks for certain word combinations in a status (it does not seem to apply to comments); if the test is true, the user is blocked from posting the message and shown a note explaining why. One problem with this approach that we've already seen is that they did not write a filter for every language used on Facebook: French, Greek, and Turkish posters could continue to post this to their statuses.
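As a rough illustration of what I mean - an educated guess only, with phrases and function names that are my own and not Facebook's actual code or API - the check could be as simple as:

```typescript
// Rough illustration of the kind of status filter I'm guessing at.
// The phrase list and structure are purely hypothetical.

const BLOCKED_PHRASES = [
  "automation labs",               // the hoax's supposed culprit
  "monitoring who views your profile",
];

interface PostAttempt {
  allowed: boolean;
  reason?: string;
}

function checkStatus(statusText: string): PostAttempt {
  const text = statusText.toLowerCase();
  for (const phrase of BLOCKED_PHRASES) {
    if (text.includes(phrase)) {
      return {
        allowed: false,
        reason: "This status matches a known chain-letter hoax.",
      };
    }
  }
  return { allowed: true };
}

// An English-only phrase list like this one is exactly why French,
// Greek, and Turkish versions of the hoax kept slipping through.
```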


Next, Facebook security linked to a debunking of the hoax. That link is in my last post, but you can also see it here. Unfortunately, someone - or more likely several someones - reported the link as "abusive." This prevented users from visiting it and seeing the debunking information, which caused further speculation and rumor-mongering.


I'm not convinced that the debunking would have really had any effect; most people who already believed the hoax would continue to believe it in spite of the evidence. This is because these stories tend not to be a matter of "correct" or "incorrect," but a signifier - of group membership, of individual prejudices, or of a combination of the two. This is what my preliminary research has shown, but more on that later.


Two important events were also occurring in the background while this took place. The first is Facebook's change in look and feel. I suspect many found this unsettling - it really is a large and sweeping change. Many commenters in the thread about the Automation Labs hoax noted very negative feelings towards the new look. The second is the continuing debate about privacy on Facebook. Recently, Facebook CEO Mark Zuckerberg declared that privacy is "no longer a social norm." While I do not think the average transmitter was aware of the statement, I don't think it is impossible that it played some role in these events.