
5/03/2010

The Penultimate Post - Games As Art

My post tomorrow will be the last one on this blog; my Composition II class will be complete. I've had a great time here. Be sure to check back tomorrow evening and see what I'll be doing next. In the meantime, this games-as-art thing is something I've wanted to write about for a while.

Recently, Roger Ebert declared that video games "can never be art." Later in the article he retreats to saying that games won't be considered art within the current generation of gamers' lifetimes. These statements produced a good bit of "Argle bargle bargle ARE TOO ART!" in the gaming community - this isn't that kind of post. Ebert is an intelligent man, and he's entitled to his opinion - I just think he's wrong.

Ebert's article was a response to this TED talk from Kellee Santiago, who says that games already are art. While I agree broadly with her, I find a lot to quibble with as well. I'm going to look at both opinions and give you my take on the subject - and it is worth exactly what you paid to read it.

Let's look at Ebert's arguments first - stripping away the excess verbiage, he has a very definite idea of what constitutes art:

- Usually the creation of a single artist.

- Games are primarily about the "win" condition - i.e., points scored, levels completed, etc.

- People naturally "know" what great art is.

- No game can be compared with the great art works in other fields.

To be fair, perhaps I've missed something, but these seem to me to be the main points. Ebert also notes that what a given person (versus a culture) considers art varies.

A Man Alone . . .

This statement was made in the context that video games are evolving from a primitive state toward more sophisticated art - the example being early cave paintings versus the old masters. Ebert points out that even in collaborative work, there is usually a single artist who gets the ball rolling. He believes that video game development, typically being a group effort, disqualifies games.

I'll even admit that I sympathize with his opinion, that I want to share it. I dislike "organized" art, such as schools of painting or sculpture. But I feel his opinion is irrelevant at best. If you go back to early gaming, even where the final product was developed by a team (the early Build engine games, for example), there was still a lead developer who had a vision for what the finished product would look like. We could say the same thing about a more modern game like Brutal Legend, which was started by Tim Schafer's vision and added to by other artists. It really isn't any different in that respect from a tribal dance or a group of cave paintings.

I really don't think his statement here is in any way important. Even if video game development didn't have a lead, even if it was wholly a collaborative effort from #include to the end statement, it doesn't really say much about the finished product.

4 teh Win!11!


Some games have "win" conditions. Halo, Civilization 4, Zork, Pong, Atari Combat - all these games have win conditions. It doesn't necessarily follow that they are not art. Just being "different" from paintings, music, dance, motion pictures, etc. is not enough - you have to specifically state why having a win condition disqualifies games as art.

Ebert recognizes that some games don't have a win condition:

Santiago might cite a immersive game without points or rules, but I would say then it ceases to be a game and becomes a representation of a story, a novel, a play, dance, a film. Those are things you cannot win; you can only experience them.
This is wrong on a couple of levels. Most adventure games have neither points nor hard-and-fast rules - they are primarily about the story the designers want to tell - but they are not the same thing as a novel, an audiobook, or a visual presentation of a story. There is still a subtle win condition - completing the game - but it isn't the same as in Space Invaders or Left4Dead. At the same time, I don't see how you can say that a game without a win condition isn't a game.

For instance, it is impossible to win World of Warcraft or Farmville. I'm only going to speak to the former here - I really don't understand what would possess someone to play Farmville. In the case of WoW, there isn't really a win condition set by the game - the player decides what constitutes winning. Due to the changing nature of games like these, even that is not a constant.

Some players just want to get their character to the level cap; others need every character they have on a server to reach that cap. Some don't want to level at all - they reach a certain arbitrary point (say, level 19) and decide to just do player versus player combat at that point. For other players it is having the very best gear available at any given point. The last is probably the most common goal, but as those goal posts are always in motion, there is no final end game until Blizzard stops developing the game. I'll provide a clearer example from my own experience.

Several years back I started a druid named Devothumb on the Khadgar server, and I still play him today. In the original WoW, leveling a druid was very difficult. This was because druids were a healing class, and by necessity didn't have a lot of damage-dealing abilities. So to begin with, getting my druid to level 60, the level cap at that time, was my goal. WoW had a storyline back then, but it often felt fragmented; as often as not, I thought about my own story of Devothumb the Druid and created ideas about what sort of person he was. Later, when I reached the cap, I started raiding.

Raids in WoW are really big dungeons that require a lot of people working together to complete. Back then it was 40 players, and you could try once a week. As I said, druids were initially healers, but the main reason you brought them to a raid was to support another healing class - priests. I chafed under that requirement; I wanted to do something different, and eventually I reached that goal - I raided in Blackwing Lair as a Moonkin (damage-dealing) druid after the 1.8 patch made it barely viable. After the Burning Crusade expansion was released the rules changed again, and so did my goals. Today, I'm back to healing when I have time, which isn't as often as I'd prefer. My goal is simply to be a good healer and help people finish lower-end content: five-man dungeons. I'm pretty happy so far.

I Don't Know What Art Is, But I Know What I Like . . .


I take issue with the idea that people naturally know what great art is. Cezanne's early works were critically panned and physically attacked by some patrons; today, few would argue that he wasn't a great artist. I think it's telling that as often as not we only award someone the mantle of "great artist" after they are safely dead.

Both Santiago and Ebert talk about what is and isn't art. I hate to be vague, but when they say this I think they are using it as a stand-in for "these are things I don't like." I don't think that's a valid way to approach the issue. By that rationale, Alas, Babylon, a critically acclaimed novel, isn't art - I don't care for it and think it is one of the worst novels I've ever read.

Santiago specifically mentions The Simple Life as an example of where television went wrong, where it did something that wasn't art. I've never seen the show, but it doesn't look like something that would interest me. Frankly, most television and movies don't light my fire - but I won't write either medium off as "not art." I suspect there might even be good arguments for the show she mentioned as an artistic work. Who is right? It's a matter of personal preference.

Yeah, but It's Not Shakespeare . . .


Apples also are not oranges. But if nobody has compared gaming to television, movies, drama, or novels, let me be the first. I think the game Sanitarium is as good as anything done by Hitchcock. I think one thing that hurt Santiago's argument here is that she focused on commercially successful independent developers. It's okay to show work that hasn't been rewarded by the marketplace; great art often isn't. That was certainly the case with Sanitarium - it was the only work produced by that development house, and it was a commercial failure. It is also okay to show off the work of large studios and "AAA" games. They can be art too, even if they are successful in the marketplace.

But Do We Really Have To?


The argument in Ebert's essay that I have the most sympathy with is this: I'm not sure it is a good idea to have games considered art. I think the art world and the Games-As-Art movement can often be so stodgy that they become a parody of themselves. At the end of the day, a game should be fun. If it fails at that, then I can say, hopefully without contradiction, that it might not be art.

Ebert wonders why it is important to "gamers" to have their medium declared art. I think there are a couple of reasons for this; recognition for the creative work that developers do is one of them. But the games-as-art movement is trumpeted louder by players than by developers, so culturally speaking I think that recognition is an afterthought. I think the reason players want this is that we've been marginalized by the mainstream culture for a long time. The stereotype of a "gamer" is an overweight, socially maladroit male who lives in his parents' home longer than is socially acceptable. As with any stereotype it is sometimes true, but more often - especially today, as games become more accepted culturally - it is not.

Additionally, it is a bulwark against the worst excesses of the mainstream media. Fox News reported that the game Mass Effect featured "full frontal nudity" and was "marketed to children." This was in no way true; that sterling bit of reporting was sourced as "I heard it from a friend." The same network claimed that Modern Warfare 2 is about "being a terrorist." Other mainstream outlets have treated the medium with the same scorn and disregard. The majority of stories about video games are negative, and stories about video games with mature subject matter are negative absolutely without exception.

I think that because video games are a different medium, they are consumed differently than other forms. I think art is sometimes created by (or at least driven by) the player, not the developer. House of the Dead 2 & 3 for the Nintendo Wii, at least as it is played by my friends, is a good example of this. It is a rail shooter - you move through a linear story, shooting zombies for score. The localization of this game is very poor, resulting in a high quantity of "Engrish," and playing it is almost like an episode of Mystery Science Theatre 3000. A novel or a movie, by contrast, is mostly a one-to-one experience between the author and the consumer.

Fin


I think that both parties have it wrong. I feel that Roger Ebert does not have the necessary qualifications to determine whether video games are art. He is not, so far as I know, well versed in the medium.

In some ways, though, Kellee Santiago's arguments make me even more uncomfortable. I agree with her that games are already art, but I'm less sure that you can point to game A (say, Braid) and say this is good art, and point to game B (Grand Theft Auto) and say it isn't - at least not until well after the fact. The idea of using a study to promote a particular viewpoint on games as art feels too reminiscent of Socialist Realism, or of the Surrealist school that rejected Dali because his paintings sold.

I think we have to let these sleeping dogs lie, and after we are long cold in the ground, the people who come after us get to decide which games, novels, plays and movies are good art. I think creators should be free to make what they like, and while this will sometimes produce wonderful games, it will also occasionally produce something ugly or daft. We have to move forward being okay with that.


2/09/2010

Some Minor Kvetching, and More on the Facebook Automation Labs Hoax

Okay, so I just had my first American National Government class. I'm not so sure that went well for me. It is a subject I'm interested in, I'm very familiar with the material, and the teacher is personable. However, I have serious Gordon rule problems (this is why this is on topic) - the only time I pick up a pen is to sign my name. There is a good reason for this: my handwriting looks like something that would be written on the side of a flying saucer in a "B" movie. This is not something I can readily change - my hands are agile, but very weak. I've been typing since I was about five or six years old. So yes, I do get the same muscle-brain stimulation from typing that someone writing notes by hand gets when they put pen to paper; like I said, this is what I grew up doing.


My notes for each test in government are to be handwritten and turned in for credit towards the Gordon rule. Because my letters (even in lower case) are quite large, I take up more page real estate (even when concentrating and taking my dear, sweet time forming the letters), and thus I get fewer words per page. On top of that, when writing at speed, I'm still about three times slower than everybody else. I'm not worried about the tests - I'll ace those handily - I'm concerned about getting my Gordon rule credit. I stressed this multiple times. I'm not asking to use Google on my tests, but even being able to PRINT my notes would be nice. Maybe I could actually read them. I'm also a touch irked at being forced to do things like everybody else when I have a documented problem in that area.


I will now cease ranting.



Let's talk MLA instead. You'd think I would be ranting here, but I think MLA has improved dramatically since I last used it in Comp I. I don't think it is all the way there yet, but it at least looks like they're starting to treat electronically formatted documents as real documents. I applaud the removal of the URL from web citations. This is awesome - URLs were a cause of major formatting woes on my papers in Comp I.


Now I want to talk about what they could do to make things even better. They need to tighten up some of their citation formats - I'll use the email interview citation from number five on our MLA exercise:


An e-mail interview you conducted with Nora James on November 1, 2008.

This is how I cited it, and I believe it to be correct using the current MLA format:


James, Nora. No Subject. Email to Nora James. 1 Nov. 2008. Email.

This is how another student cited it:


Student Name Redacted -Ed., and Nora James. "Interview." Message to the author. 1 Nov. 2008. Web.

I think his method is more correct than the current MLA standard. I might even go so far as to put it: Interviewer Name (last, first). Interview with Interviewee Name. Interview Date. Email or Web. Most mail is "web mail" - very few people use an email client like I do - so I feel "Web" is a valid medium. I could see some quibble room there, though. Why does the interviewer's name come first? Because the interviewer typically controls the interview. Interviewers write the questions, ad-lib follow-up questions as needed, and comment on interviewees' responses. They typically have a point that they want to lead up to when interviewing someone; that is, they are interviewing a person for a reason.
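
Under that scheme, the citation for the exercise above might read something like this (with "Doe, Jane" standing in for the redacted student's name):

Doe, Jane. Interview with Nora James. 1 Nov. 2008. Web.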


I could see reasonable arguments for making the citation more like a cite for a message board post. While only Google is currently using "threaded email conversations" as a model, you can see how they got there even in the traditional multi-level quoting system used by most mail clients. I think Google's model is a step forward and that eventually it will become the standard - just as most other email providers stepped up the amount of mail you could store in response to Gmail. But the key word there is "threaded" - just like a message board thread with its original post and responses. Just like Angel's message board system. There are more similarities in the media than differences, so this is a reasonable way to handle citations where email is the medium.


The real solution to this problem is larger in scale. MLA, APA, and the administrators of the other systems need to sit down, take their hats in their hands, and admit that they don't "get" the internet. That's okay, no shame in it. It isn't their job to figure out the internet. It is the World Wide Web Consortium's (W3C) job, and when they are done with that moment of self-confession, they should go have a friendly chat with the W3C.


Sooner or later this needs to happen, and I can explain why. I've lived on both sides of this fence - my Dad had a typewriter shop when I was young, I learned to type at age six, and I got good at touch typing when I was eleven. While I've been using a computer continuously since then, I used typewriters in the early days as much as I did my C64. Word processing on a computer hadn't quite arrived yet; though I had the means to do it, it was very expensive. So I learned about things like picas, which nobody in their right mind uses anymore. That's one of the things they need to discuss with the W3C - standards and measures.


An MLA-formatted paper needs one-inch margins, double spacing, and a header with your last name and page number. None of this works all that well on the web. Yes, we aren't doing papers on the web here yet (PDF doesn't count), but I guarantee you that my youngest classmate's children will be, at the very least. Why not prepare for that day now and do it right, rather than trying to hash it out when it's needed?

Specifically, the reason it doesn't work on the web is "scaling" - good web pages scale. MySPC and Angel designers, please take note: this is what SPC's own web development curriculum has taught me, and it is also a W3C standard. Scaling means that no matter how large your browser window is, the page's proportions adjust to fit it properly. The Firefox viewing area on my iMac measures 1827px wide by 968px tall - I run my browser very large. Because my Macbook's screen is smaller, the measurement in pixels (px) will be smaller there, even though the window takes up the same percentage of the screen. Designers typically test pages at 1024px wide by 768px tall, and sometimes at 800px wide by 600px tall.


Things like margins and indents are possible, but things like non-breaking spaces in tables (you would likely use a table for citations, for example) can cause issues in some browsers. Cough. Internet. Cough. Explorer. Cough. Excuse me, too many cigarettes, I need to quit. Just like people need to quit using IE until Microsoft commits to making it into a real browser. Sorry, mandatory cheap shot.


Arguing about browsers aside, if the MLA and the W3C got together and discussed standards, perhaps we could find a better way to render this stuff. Using tables, divs, or non-breaking spaces is a very clumsy approach. I feel I am a pretty good designer, and even I don't know how consistent my results would be creating papers for use on the web. New elements could be added, or existing elements modified, to suit academic uses. This would be a great thing.


The question I haven't answered above is "why" - why should the MLA bother with all this? Why can't they keep doing business as usual and ignore the web? Because nobody can. I don't care if you're Ted Kaczynski. If you are going to communicate with people, to share information, you cannot ignore this medium. The MLA's job is essentially organizing information; they especially cannot afford to ignore it. There are a number of positive benefits too. Coauthoring and peer reviewing become a great deal easier when you have a tool like Google Docs, or perhaps Google Wave in the future. I might even revise my stance on group work (I hate it more than death) if it were done under a system like that, because you can see exactly who is contributing and who is not. Everybody wins when you can share information with more people - it's just that simple.


So let's talk about Facebook and the automation hoax. There are still some great questions left here: What's unique about this situation? Why did the message move to email as a medium? Why are so many commenters unwilling to believe this is a hoax?


Let's look at the current status of the posting first. Some stats: 1,280 likes (I put a like on it for ease of tracking, so it displays as 1,281) and 395 comments since Wednesday. I'm seeing comments in multiple languages; the hoax has definitely spread in French, and some are citing the French version as the origin of the hoax. I am not seeing any direct evidence of this, though French-speaking users do seem active in propagating it. Greek and Turkish speakers are also propagating it. The likes seem about the same as when I last viewed it late Friday night; there are about 45 additional comments.


Here we need to discuss the concept of "opt-in," which is kind of complicated. Let's watch a video that discusses how opt-in impacts communications, and I'll explain how it relates. The relevant part of the video is about five minutes in, but I'd like to encourage you to view the entire program. It's a great presentation.



So what does this have to do with viral communications (email forwards, Facebook statuses, retweets)? All viral communications are an opt-in process with very few options: forward that email, forward it with an addendum, or don't forward it at all. So in economic terms, we can say each message has a "cost," though it may not be an easily measurable cost in financial terms. In viral communications this cost is a sliding scale, from the high end - messages like the Craig Shergold emails that ask the recipient to take action in the real world - down to Facebook statuses, which require a very small investment of time or effort.


As Dr. Ariely points out, cost is not the only factor, nor is it the primary one, in a user's decision to believe the Automation Labs hoax. In my opinion several other factors converged to make it possible: 1) Users unfamiliar with the way Facebook operates. Arthur C. Clarke said "Any sufficiently advanced technology is indistinguishable from magic." That is the average user's view of how the internet works - ask a non-technical person how he thinks his computer connects to the internet sometime; I guarantee you the results will be fascinating. In this case our user not only failed to understand how suggestive search operates, but also how Facebook accounts work, though they use both daily. 2) Facebook's response.


Facebook's response is notable, as this is a somewhat unusual situation. Facebook is a very closed system, in that all the content is focused on their site, even if you're bringing in content from a site that participates in Facebook Connect. As such, they can moderate content when they choose to, and they have a mechanism in place to do just that.


Additionally, other parts of Facebook security are partially crowd-sourced. If enough people report a status, comment, or link as breaking the terms of service, someone will investigate it. As you might imagine, these are very busy people.


Compare this to 1989. In 1989 chain letters were typically sent by email; occasionally you'd see one on USENET. Both of these are radically decentralized. While it was possible to report a sender to his ISP, it was never a given that anything would really be done about it. At the core, there really wasn't anyone in charge, and in most cases there was no moderator. America Online was a new service at the time, and it had an internal component you could compare to Facebook, as well as a gateway to the internet. Chain letters inside their system, or sent through the gateway to internet email, were not typically pursued with the same zeal as other termination-of-service offenses.


Because of this closed system, where all roads lead to Facebook, they were in a unique position to try to cut the chain. While I'm not specifically versed in the inner workings of Facebook, I can make an educated guess about how they did this from a technical perspective. They likely wrote a script that looks for certain word combinations in a status - it does not seem to apply to comments - and if the test matches, it blocks the user from posting the message and returns a notice saying why. One problem with this approach, which we've already seen, is that they did not write a script for every language used on Facebook. French, Greek, and Turkish posters could continue to post this to their statuses.
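
To make that guess concrete, here is a minimal sketch in Python of the kind of filter I'm describing. Everything in it - the trigger phrase, the function name, the block notice, and the sample statuses - is my own illustration, not Facebook's actual code.

# A minimal sketch of a keyword-based status filter.
# The phrase list, messages, and examples are illustrative guesses,
# not Facebook's actual implementation.

BLOCKED_PHRASES = [
    "automation labs",
]

BLOCK_NOTICE = ("This status was not posted because it appears to repeat "
                "a known hoax. See the Facebook Security page for details.")

def check_status(status_text):
    """Return (allowed, notice) for a status update; comments are never checked."""
    lowered = status_text.lower()
    for phrase in BLOCKED_PHRASES:
        if phrase in lowered:
            return (False, BLOCK_NOTICE)
    return (True, None)

# An English status trips the filter, but a reworded French status slips
# through because nobody wrote a French phrase list.
print(check_status("Automation Labs can now monitor your profile!"))
print(check_status("Attention, une société surveille maintenant votre profil !"))

Whatever the real implementation looks like, the weakness is the same: the filter only catches the phrases someone thought to give it, which is why the hoax kept spreading in other languages.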


Next, Facebook security linked to a debunking of the hoax. That link is in my last post, but you can also see it here. Unfortunately, someone - or more likely several someones - reported the link as "abusive." This prevented users from visiting it and seeing the debunking information, which caused further speculation and rumor-mongering.


I'm not convinced that the debunking would have had much effect anyway; most people who already believed the hoax would continue to believe it in spite of the evidence. This is because these stories tend not to be a matter of "correct" or "incorrect," but a signifier - of group membership, of individual prejudices, or of a combination of the two. This is what my preliminary research has shown, but more on that later.


Two important events were also occurring in the background while this took place. The first is Facebook's change in look and feel. I suspect many found this unsettling - it really is a large and sweeping change. Many commenters in the thread about the Automation Labs hoax noted very negative feelings towards the new look. The second is the continuing debate about privacy on Facebook. Recently, Facebook CEO Mark Zuckerberg declared that privacy is "no longer a social norm." While I do not think the average transmitter was aware of that statement, I don't think it is impossible that it played some role in these events.