30 May, 2011

A cross-referenced book review. Press On: Principles of Interaction Programming

1. Introduction

York and Pendharkar define ubiquitous computing (ubicomp for short) as a post-desktop model of human-computer interaction in which information processing has been thoroughly integrated into everyday objects and activities. In the course of ordinary activities, someone "using" ubiquitous computing engages many computational devices and systems simultaneously, and may not necessarily even be aware of doing so. This model is usually considered an advancement over the desktop paradigm.

More formally, ubiquitous computing is defined as "machines that fit the human environment instead of forcing humans to enter theirs" (York and Pendharkar, 2004).

This cross-referenced book review investigates two articles related to ubiquitous computing. These articles set a frame of reference for discussing ubicomp-related ideas in Harold Thimbleby's "Press On: Principles of Interaction Programming."

The articles chosen for setting the frame of reference are as follows:

• "The Computer for the 21st Century" by Mark Weiser
• "Yesterday's Tomorrows: notes on ubiquitous computing's dominant vision" by Genevieve Bell and Paul Dourish

Thimbleby states (on the back cover of his book) that interactive systems and devices, from mobile phones to office copiers, do not fulfill their potential for a wide variety of reasons—not all of them technical. He argues that we can design better interactive systems and devices if we draw on sound computer science principles.

Interactive systems encompass much more than our desktops and laptops. Most people don't really think about the devices they interact with daily. In most cases, people have to "learn their language," not the other way around. So if there is an effective (e.g. standardized) way to design and program interactive systems that fit the human environment, it is well worth investigating.

2. Frame of reference
2.1 "The Computer for the 21st Century" by Mark Weiser

Mark Weiser was a researcher at the Xerox Palo Alto Research Center who envisioned a world where "specialized elements of hardware and software, connected by wires, radio waves and infrared, will be so ubiquitous that no one will notice their presence."

The wording may seem dated, but the prediction was made more than 20 years ago (in 1991, to be precise). Many of the things Weiser envisioned did come true at one time or another, mostly through evolutionary technological development.

Weiser stated that "the idea of a personal computer itself is misplaced and the vision of laptop machines, dynabooks and knowledge navigators is only a transitional step toward achieving the real potential of information technology." The computers themselves have to vanish into the background. Such a disappearance is a fundamental consequence of human psychology (people get used to new technology). Today's laptops, tablets and smartphones are more than able to meet the ubicomp criteria (e.g. they can utilize the hidden information layer), but they do not disappear into the background.

Weiser argued that even the most powerful notebook computer, with access to a worldwide information network, still focuses attention on a single box. Today's multimedia machine makes the computer screen a demanding focus of attention rather than allowing it to fade into the background. We can now project a touch-sensitive keyboard onto any solid surface or use our smartphones to read QR codes (which contain additional information), so it is safe to say that there have been many advancements in ubiquitous computing, but everyday use of such technologies is not yet common. The desktop paradigm has not been vanquished.

He points out two issues of crucial importance: location and scale. Weiser thought that ubiquitous computers must know where they are. If a computer is aware of its location, it can adapt to the specific needs of that location. He also stated that ubiquitous computers will come in different sizes, each suited to a specific task. Today, light sensors adjust screen brightness and motion sensors rotate the smartphone screen when the phone is tilted. Location problems have been addressed with user-specific profiles and GPS (and A-GPS) information. It is hard to say whether that is what Weiser imagined, but location-specific services have become increasingly popular, and devices support them well.

All in all, Weiser portrayed what was, at the time, a technological fairytale. His article was both idealistic and innovative. He argued that ubiquitous computing would arrive as technology developed over time. That was the dominant notion, but Weiser also noted that even everyday life holds ubiquitous interactions with our environment (e.g. the process of reading). Weiser was not very specific about how ubiquitous computing would be achieved; his arguments were technology-driven, and he believed ubiquity would be achieved with "pads" and "tabs". In the end, Weiser leaves things open when he states that ubiquitous computers will reside in the human world and pose no barrier to personal interactions, and that ubiquitous computing will help overcome information overload.

2.2 "Yesterday's Tomorrows: notes on ubiquitous computing's dominant vision" by Genevieve Bell and Paul Dourish

Genevieve Bell and Paul Dourish analyzed and reviewed Mark Weiser's article in 2005. They state that ubiquitous computing is unusual among technological research arenas: ubicomp is driven by the possibilities of the future, whereas other technological research areas are driven by building upon and elaborating a body of past results.

Bell and Dourish agree that Weiser's article was influential, since it articulated the research agenda for the topic: almost 25% of all papers published at the Ubicomp conference between 2001 and 2005 cite Weiser's articles. Bell and Dourish are concerned with "the balance between past, present and future embedded in conventional discourses about ubiquitous computing." Weiser's vision is old and needs to be reviewed and re-evaluated.

Bell and Dourish present two alternative visions of ubiquitous computing (from Singapore and Korea). The authors try to understand the relationship between ubiquitous computing's envisioned future and our everyday present. They want to know what influence this has on contemporary ubiquitous computing research, and what motivates the remarkable persistence and centrality of Weiser's vision.

The authors argue that ubiquitous computing is already here, though it does not look the way we envisioned it. Current practices are rendered irrelevant because the future is always "right around the corner," and that discourse allows researchers to dodge responsibility for the present situation. They also argue that "the seamlessly interconnected world of future scenarios is at best a misleading vision."

Bell and Dourish point out that today's technological landscape is radically different from the one in which Weiser formed his vision of ubicomp. Yet the "proximate future" is still what ubicomp research focuses on. The authors suggest this may be because ubicomp is not really about the present, but about a future that is ever-changing by nature. The other way to look at it is that ubicomp is already here and has simply taken a different form than first envisioned. Bell and Dourish state that since we have already entered the 21st century, we should try to envision the "computer of now."

The authors point out that Singapore is one of the most connected countries in the world. They argue that the life Singaporeans live is, in essence, a good example of ubiquitous computing in action, outside the labs and research centres. The importance of the Singapore vision is that it is a collective practice, rather than a set of discrete individual actions. Singaporeans are also very keen on phones and related services. The Korean vision of ubicomp is closely connected with the internet and the various services built on that inherent connectivity.

William Gibson has said that "The future is already here; it's just not very evenly distributed." Thus, the domain of the ubicomp research should be the present, rather than future.

3. Book review and synthesis

In the beginning of the book (Part 1), Thimbleby illustrates how interactions in the real world work and why they are so complicated.

Thimbleby argues that the interactive devices around us could fulfill their objectives more efficiently with the help of interaction programming. He states that programmers can take a more creative and central role in interaction design; they also have the technical skills that designers often lack.

Thimbleby states that we can design better interactive systems and devices by using various computer science principles (for example, state machines and graph theory). He urges people to find creative solutions to design problems. That, in the context of this review, is a way to increase the ubiquitous features of interactive devices.

Thimbleby also says that: "Good user interface design isn’t a matter of following a recipe you can get from a cookbook: it’s an attitude, along with a bunch of principles, skills, and provocative ideas …." This means that there's no single generic way to address design issues in the ubiquitous world, although the book at hand may provide an overview of helpful insights and principles.

Part 2 of the book is where Thimbleby approaches the formal side of interaction programming principles and insights, even giving code examples to illustrate how to solve specific design problems. The term interaction programming challenges the conventional split in which a designer creates the interface and a programmer merely makes it work.

Part 3 of the book is devoted to interaction design practices and is by far the most important part of the book, since it gives specific guidelines on what to avoid and what to strive for when dealing with interaction design.

The overall tone of the book suggests that there's a lot in terms of usability that can be achieved and improved with programming and sound computer science principles.

The book relates to the previous articles by adding the missing piece. While the two articles were very technology-driven, the book provides a software angle on the ubicomp discourse. It is clear that Weiser was correct that we live in a constant state of "exploring the tomorrow," be it the 21st century or a similar metaphor. Technology will keep developing, and thus various ubicomp practices will emerge more often. Bell and Dourish were also correct when they proposed the Singapore and Korean visions of ubicomp: we should also deal with today's issues, and in a way, ubicomp has already arrived. Figuring out how to create a ubicomp experience, as Thimbleby does with his book, is one of the difficult tasks that needs to be addressed.

It's safe to say that the ubicomp discourse is like Tallinn: it's never finished. As Bell and Dourish stated in their article, it's hard to declare that "we're here now." The question, of course, would be "what now?"

4. References

1. York, J., & Pendharkar, P. C. (2004). Human-computer interaction issues for mobile computing in a variable work context. International Journal of Human-Computer Studies, 60, 771-797.
2. Weiser, M. (1991). The Computer for the 21st Century. Scientific American, 94-104.
3. Bell, G., & Dourish, P. (2006). Yesterday's tomorrows: notes on ubiquitous computing's dominant vision. Personal and Ubiquitous Computing, 11(2), 133-143. doi:10.1007/s00779-006-0071-x
4. Thimbleby, H. (2010). Press On: Principles of Interaction Programming. MIT Press.

27 January, 2011

Applying theories

Generative art and generative literature are best perceived through practice (theory is not as captivating as the examples found online). One does not have to be an artist to create fascinating pieces; only a clear understanding of rules, and of how to design them, is required. Imagination is paramount.

The first thing that comes to mind when discussing generative art is John Conway's "Game of Life." In fact, it is not really a game: there are no winners or losers, and the concept of the player is lost as well. There is a designer, though.

The rules are very simple:

The universe of the Game of Life is an infinite two-dimensional orthogonal grid of square cells, each of which is in one of two possible states, live or dead. Every cell interacts with its eight neighbours, which are the cells that are horizontally, vertically, or diagonally adjacent. At each step in time, the following transitions occur:

1. Any live cell with fewer than two live neighbours dies, as if caused by under-population.
2. Any live cell with two or three live neighbours lives on to the next generation.
3. Any live cell with more than three live neighbours dies, as if by overcrowding.
4. Any dead cell with exactly three live neighbours becomes a live cell, as if by reproduction.



The initial pattern constitutes the seed of the system. The first generation is created by applying the above rules simultaneously to every cell in the seed—births and deaths occur simultaneously, and the discrete moment at which this happens is sometimes called a tick (in other words, each generation is a pure function of the preceding one). The rules continue to be applied repeatedly to create further generations.
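The four rules and the generation step above can be sketched in a few lines of Python (an illustrative sketch; the function and variable names are my own, not from any cited source):

```python
from collections import Counter

def step(live):
    """Advance one generation; `live` is a set of (x, y) cell coordinates."""
    # Count, for every cell on the grid, how many live neighbours it has.
    counts = Counter(
        (x + dx, y + dy)
        for x, y in live
        for dx in (-1, 0, 1)
        for dy in (-1, 0, 1)
        if (dx, dy) != (0, 0)
    )
    # Rules 1-4 in one expression: a cell lives in the next generation if it
    # has exactly three live neighbours, or has two and is already alive.
    return {cell for cell, n in counts.items()
            if n == 3 or (n == 2 and cell in live)}

# A "blinker" seed: three live cells in a horizontal row.
blinker = {(0, 1), (1, 1), (2, 1)}
gen1 = step(blinker)   # the row flips to vertical
gen2 = step(gen1)      # and back again, oscillating with period 2
```

Applying `step` twice to the blinker returns the original seed, which makes the pure-function character of each generation easy to verify.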

The game is located at the following address: http://www.bitstorm.org/gameoflife/

Of the examples studied during the course, the bitmap example (Corcuff, 2008) with randomized pixel generation can be reproduced at the following address: http://www.random.org/bitmaps/

Much the same (figuratively speaking) 10x10 pixel squares can be created to illustrate the aforementioned example. Squares of up to 300x300 pixels with 4x zoom can be produced with this tool.
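A comparable 10x10 bitmap can also be generated locally with a few lines of Python (a sketch only; note that Random.org derives its randomness from atmospheric noise, whereas this uses a pseudo-random generator):

```python
import random

def random_bitmap(width=10, height=10, seed=None):
    """Return a height-by-width grid of pixels, each randomly 0 or 1."""
    rng = random.Random(seed)
    return [[rng.randint(0, 1) for _ in range(width)] for _ in range(height)]

# Render the bitmap as text: '#' for a black pixel, '.' for a white one.
for row in random_bitmap(seed=42):
    print("".join("#" if pixel else "." for pixel in row))
```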

Random.org hosts quite a few randomization-centric tools. Among these, the name Samuel Beckett caught the eye (randomly generated short prose), which relates directly to generative literature. The website states that:

"In 1969, the Irish-born writer Samuel Beckett (1906-1989) published the piece of short prose Sans in French. One year later, in 1970, it was followed by a translation (by Beckett himself) into English titled Lessness."


"An interesting characteristic of this work is its combination of dense aural and structural patterning and apparent randomness. Both versions consist of 24 paragraphs containing a total of 120 sentences. Each sentence occurs twice: once in the first half of the work and once in the last. Beckett later indicated to critics that the order in which the sentences in Lessness appear had been determined randomly by drawing little slips of paper out of a container." (Random.org, 2011).

The Random.org site allows you to create your own version of "Lessness". Although access to the site is free, it's still restricted due to copyright considerations. A password can be obtained by sending an e-mail.
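Beckett's procedure of drawing sentence order out of a container is straightforward to imitate in code (an illustrative sketch of the two-half structure described above, not the actual Random.org generator; the placeholder sentences are my own):

```python
import random

def lessness_order(sentences, seed=None):
    """Arrange sentences as in Lessness: every sentence appears once in
    the first half and once in the second, each half in random order."""
    rng = random.Random(seed)
    first, second = list(sentences), list(sentences)
    rng.shuffle(first)
    rng.shuffle(second)
    return first + second

# A toy run with three placeholder "sentences" instead of Beckett's sixty.
order = lessness_order(["Sentence one.", "Sentence two.", "Sentence three."],
                       seed=1)
```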

Generative art has been discussed at great length, but generative literature has not yet been defined in this blog. Thus:

Generative literature, defined as the production of continuously changing literary texts by means of a specific dictionary, some set of rules and the use of algorithms, is a very specific form of digital literature which is completely changing most of the concepts of classical literature. (Balpe, 2005).
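Balpe's three ingredients, a dictionary, a set of rules and an algorithm, can be illustrated with a toy sentence generator (entirely illustrative; the word lists and template below are my own invention, not Balpe's):

```python
import random

# The "dictionary": word lists keyed by grammatical role.
DICTIONARY = {
    "adjective": ["grey", "endless", "silent", "scattered"],
    "noun": ["ruins", "sky", "sand", "figure"],
    "verb": ["fades", "endures", "falls", "returns"],
}

# The "rule": a sentence template over the dictionary roles.
TEMPLATE = "The {adjective} {noun} {verb}."

def generate(n, seed=None):
    """The "algorithm": produce n sentences by filling the template
    with randomly chosen words for each role."""
    rng = random.Random(seed)
    return [TEMPLATE.format(**{role: rng.choice(words)
                               for role, words in DICTIONARY.items()})
            for _ in range(n)]

for line in generate(3, seed=7):
    print(line)
```

Each run with a different seed yields a different text from the same dictionary and rules, which is exactly the "continuously changing literary texts" Balpe describes.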

An overview of generative art and literature examples can be found on the following addresses:

http://www.random.org/
http://blog.hvidtfeldts.net/index.php/generative-art-links/
http://generatorblog.blogspot.com/


References

Wikipedia. (2011). Conway's Game of Life. Available: http://en.wikipedia.org/wiki/Conway's_Game_of_Life. Last accessed 27.01.2011.

Random.org. (2011). Possible Lessnesses. Available: http://www.random.org/lessness/. Last accessed 27.01.2011.

Random.org. (2011). Random bitmap generator (BETA). Available: http://www.random.org/bitmaps/. Last accessed 27.01.2011.

Balpe, J-P. (2005). Principles and Processes of Generative Literature. Available: http://www.brown.edu/Research/dichtung-digital/2005/1/Balpe/index.htm. Last accessed 27.01.2011.



16 December, 2010

Task 14: Final reflections

On the practical side of things (technology, tools, interactive environments) I like to keep myself informed. Thus, I didn't discover anything I hadn't heard about before. On the theoretical side, I was overwhelmed with the topic of interactivity. I learned that there's no single and correct way to address the topic of interactivity. Texts from 5-10 years ago may turn out to be outdated. And if they're not outdated, they tend to be vague in order to capture the essence of interactivity as a whole.

I enjoyed learning about activity theory and how to use it as a framework, but I still believe it depends on the researcher and lacks objectivity. I also enjoyed studying Kiousis's journal article "Interactivity: a concept explication."

Regarding this course, I would have expected to receive the course structure beforehand. Personal feelings aside, it is also required by the university; one cannot plan ahead if communication is flawed. I also received zero feedback on my writing during the course, and thus I feel disappointed. Why not give 2-3 well-planned tasks and actually provide sufficient feedback? The lack of feedback made me indifferent, and I lost motivation for this course. How is one to know if he or she is on the right track?

If there's a greater concept behind all this, do let me know. I know it's nice to think outside the box once in a while, and university is the place to do it, but gaining a personal understanding should work a little differently.

14 December, 2010

Task 13: Redesigning and re-instrumentalising activities

"With very few exceptions all emotions operate on the stage of interpersonal interactions." (Toda, 1999, p. 21).

Grudin states that his focus is on the effects of placing technology in the middle of these interactions (Grudin, 2000).

Interpersonal communication is easy nowadays: landline phones, cellphones, payphones, SMS, e-mail, VoIP calls, IM clients, video calls. You name it, you've got it. All common, nothing new.

Grudin states that:

Novel forms of mediation alter or remove critical aspects of context associated with natural or familiar interactions. Words may be transmitted, but not the tone of voice; or voice may be transmitted, but not facial expressions; or voice and facial expressions without hand and arm gestures; or all of it may be transmitted, but from a different perspective than is available when present in person. (Grudin, 2000).

This statement is debatable. Video calling can transmit words, facial expressions, tone and even body language, if participants choose to express it via the camera. The only thing that may not be transmitted is smell.

Grudin proceeds to make an interesting point:

Greater visibility can increase efficiency, but it also creates complications, raising issues of anonymity, privacy, censorship, security, reciprocity, accountability, and trust. Cognition and emotion are intertwined throughout. (Grudin, 2000).

So, being online on Facebook (status is visible to the peers) may reveal that one is slacking off rather than doing hard work. But making business calls via Skype may result in more efficient communication and/or smaller expenses.

That's communication, the most basic interpersonal activity there is. But what if we were to look at communication in a specific context, education for example?

E-learning has been around for years now. General education schools promote E-Kool (e-school) as a platform where teachers and parents can communicate and inform one another. Universities even have online degrees, online courses and various platforms for online interaction.

The best way to redesign education is to create a "networked school." Nowadays there is no real need for the physical infrastructure (e.g. the school building itself). If a professor is employed, why not record his or her lectures and share them online? Students gain the possibility to re-watch videos when needed. Assignments can be handled via forums, wikis and blogs; online chats (and seminars) can be hosted via Skype. Professors and students can thus be geographically independent.

There's no real reason why this system wouldn't eventually succeed. Transportation will become more expensive (given that there is no alternative to fossil fuels), and online learning (as explained above) is a very viable alternative.

References:

Grudin, J. (2000). Digitally Mediated Interaction: Technology and the Urge System. In G. Hatano, N. Okada & H. Tanabe (Eds.), Affective Minds (pp. 159-167).

05 December, 2010

Critical review - Chance and Generativity by Marie Pascale Corcuff

Introduction

Marie-Pascale Corcuff states that we human beings don't generally like to rely on chance. We like to control our lives, and thus the use of randomness seems to be an abdication of our power of decision.

The paper at hand is about the use of chance in generative processes. The author summarizes the key argument by stating that diversity may be obtained without losing identity.

Corcuff shows through a series of experiments that diversity in generative art processes can be obtained without losing identity. The meaning behind the text can be somewhat hard to grasp at first sight; for example, the reader is expected to have prior knowledge of IFS (Iterated Function Systems). In mathematics, an IFS is a method of constructing fractals. Symbolically speaking, it is a finite set of contraction maps on a space X:

{ f_i : X → X | i = 1, 2, …, N },  N ∈ ℕ
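To make the notation concrete, the Sierpiński triangle arises from an IFS of three contractions and can be rendered with the "chaos game" (a standard textbook example, not taken from Corcuff's paper; names are my own):

```python
import random

# The three contractions f_i(p) = (p + v_i) / 2, one per triangle corner v_i.
CORNERS = [(0.0, 0.0), (1.0, 0.0), (0.5, 1.0)]

def chaos_game(n=10000, seed=None):
    """Iterate a randomly chosen map n times; the points settle onto
    the attractor of the IFS (here, the Sierpinski triangle)."""
    rng = random.Random(seed)
    x, y = 0.5, 0.5
    points = []
    for _ in range(n):
        cx, cy = rng.choice(CORNERS)      # pick one map at random
        x, y = (x + cx) / 2, (y + cy) / 2  # apply the contraction
        points.append((x, y))
    return points

pts = chaos_game(5000, seed=0)
```

Despite the randomness of the map choices, the cloud of points always converges to the same fractal shape, which is precisely the "diversity without losing identity" the paper argues for.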

The strengths and weaknesses of this paper relate to the same topic: experiments/examples. The author anticipates an audience familiar with mathematics. While the examples are sufficient (and it's always nice to have something explained to the reader in black and white), they may not be understood by all.

The purpose of the paper is to discuss and illustrate the different meanings of chance relative to probability, combinations, unpredictability, coincidence, chaos, disorder, control and intentionality, i.e. to explain the concept of generative art.

Structure-wise, the paper is fairly simple. Chance is discussed in relation to unpredictability, insignificance and diversity. Examples of formal research are presented as well.

Chance and unpredictability refer to the set of rules required to generate unique results (generative art). A definition by Philip Galanter is chosen to illustrate this concept: "Generative art refers to any art practice where the artist creates a process, such as a set of natural language rules, a computer program, a machine or other procedural invention, which is then set into motion with some degree of autonomy contributing to or resulting in a completed work of art."

Conway's Game of Life (http://en.wikipedia.org/wiki/Conway's_Game_of_Life) is a good example of this connection. We can sow the seeds, but it is hard to predict what the garden will look like (Corcuff uses the garden metaphor as well).

Chance and insignificance refer to the different methods of obtaining randomness for the sake of generative art; it is the insignificance of the result that proves the randomness of the data. Chance and diversity argue that, within a defined process, randomness can be used to provide diversity. Children are not more complex than their parents, but they are different.

Summary

The key points of Corcuff's paper state that:
  • Diversity may be obtained without losing identity.
  • Chance refers to a set of rules defined to generate randomness.
The paper lacks a concrete summary of the findings. There is a conclusion, but it's very brief (probably due to the event-dictated format).

Generative art

A quote (used earlier in this review and also by Corcuff) by Philip Galanter best describes the essence of generative art:

"Generative art refers to any art practice where the artist creates a process, such as a set of natural language rules, a computer program, a machine or other procedural invention, which is then set into motion with some degree of autonomy contributing to or resulting in a completed work of art."

Generative art is not solely computer-based. Corcuff uses a garden metaphor to explain how chance works in relation to generative processes. Regarding historical context, Corcuff cites a book by Jacques Monod titled "Chance and Necessity."

Critique

The paper lacked a summary of findings. There were plenty of examples (which is always welcome when a difficult subject needs explaining), but the conclusion was very brief. Also, the terms unpredictability, randomness, diversity and insignificance were explained (through examples) but never defined, so it was hard to distinguish what they meant and how they relate to each other.

Conclusions

Corcuff's opinion is that the use of chance in artistic generative processes can produce diversity without sacrificing identity. The paper is very good at explaining how generative processes work (e.g. the 10x10 pictures, the Library of Babel, etc.). Although it requires a technical background in places, it is still useful for those who are new to generative art and need an introduction.

References

Corcuff, Marie-Pascale. 2008. Chance and Generativity. In GA2008, 11th Generative Art Conference, 189-199. Retrieved 21.12.2010 from http://www.generativeart.com/on/cic/papersGA2008/16.pdf

Wikipedia. (2011). Conway's Game of Life. Available: http://en.wikipedia.org/wiki/Conway's_Game_of_Life. Last accessed 21.12.2011.

Wikipedia. (2011). Iterated function system. Available: http://en.wikipedia.org/wiki/Iterated_function_system. Last accessed 21.12.2011.

28 November, 2010

Task 10: Applying activity theory into practice.

Introduction

In 2008, Lorna Uden, Pedro Valderas and Oscar Pastor proposed a three-step approach (based on various contributions) for applying activity theory to the analysis of Web application requirements (see source). This example of applying activity theory in practice is suitable for the task at hand (i.e. comparing PLENK2010 to the New Interactive Environments course and defining the activity systems for both).

The methodology consists of the following steps:

1. Clarify the purpose of the activity system

The purpose of this step is to understand the context within which activities occur and to reach a thorough understanding of the motivations for the activity being modelled and any interpretations of perceived contradictions.

2. Analyse the activity system and produce the activity system

This step involves defining, in depth, the components of the given activity, namely, the subject, object, community, rules and division of labour.

3. Analyse the activity structure

This step involves decomposing each activity into actions and operations.

The given methodology will be used while defining and comparing the activity systems of PLENK2010 and New Interactive Environments courses.


PLENK2010 activity system

1. Clarify the purpose of the activity system

The purpose of PLENK2010 (Personal Learning Environments Networks and Knowledge 2010) is to facilitate a randomized but highly personal collaborative learning experience online (a so-called connectivist course). This experience is not delivered through a single place or environment; users pick and work with the content they prefer.

2. Analyse the activity system and produce the activity system




3. Analyse the activity structure

A fellow student (Ilya Šmorgun) has produced a very neat comparison of the two activity systems. Since there is no specific need to reproduce this content, a link is provided instead: http://shmorgun.net/wp-content/uploads/2010/11/Activity-System-Comparison.png


New Interactive Environments activity system

While analysing and constructing an activity system for the NIE course, some inherent conflicts were discovered. Activity systems for the two courses can be absolutely identical. But they can also be very different. What determines this vast variation?

For example, we can take a closer look at the objects of the aforementioned activity systems. The PLENK2010 course sets out to provide a very unique learning experience. A person is expected to choose what he/she reads and how he/she repurposes that content. If we break it down to keywords, we can define the PLENK2010 object as:

- user generated content
- personalized learning experience
- use of various tools

The NIE course activity system object may be defined identically. On the other hand, we can view an alternative level of detail and interpretation by Ilya and discover that the two activity systems are fairly different. But isn't a blog post the same as "user generated content"?

Comparison and conclusion

As was pointed out earlier, there is more than one way to look at these activity systems. It is my personal opinion that activity theory cannot easily be applied in practice: activity theory and the related analysis depend heavily on interpretation. How does one choose a suitable level of detail? How does one define an activity system objectively? This topic seems to have too many loose ends and far too much context for one blog post to handle.

18 November, 2010

Task 9: Exploring activity theory as a framework for describing activity systems

To gain a better knowledge of activity theory and activity systems, several sources were studied:

1. Kuuti, K. (1995). Activity theory as a potential framework for human-computer interaction research. In. B. Nardi (Ed.), Context and Consciousness: Activity Theory and Human Computer Interaction. Cambridge: MIT Press.
2. Uden, L., Valderas, P. & Pastor, O. (2008). An activity-theory-based model to analyse Web application requirements. Information Research, 13(2).

The first source (a journal article by Kuuti, K.) focuses on human-computer interaction (HCI) research. Activity theory is considered a potential framework for such studies. Kuuti merely provides an overview of the HCI related research and relevant criticism. He defines activity theory as:

"... a philosophical and cross-disciplinary framework for studying different forms of human practices as development processes, both individual and social levels interlinked at the same time."

Kuuti also defines the three key principles of activity theory:

- activities as basic units of analysis (minimal meaningful context for individual actions must be included in the basic unit of analysis)

- history and development (activities are not static or rigid unities but they and their elements are under continuous change and development)

- artifacts and mediation (an activity always contains various artifacts such as instruments, signs, procedures, machines, methods, laws, forms of work organization, etc.)

A visualization of the activity theory is provided as follows (courtesy of the internet):


Kuuti does not explain the essence of activity theory very clearly, which is why an alternative visualization (courtesy of the internet) was provided. The definition below the visualization makes things a lot clearer.

Since the goal was to explore activity theory as a framework for describing activity systems, we most certainly have to look into what Uden, L., Valderas, P. & Pastor, O had to say about it. Their journal article has a specific paragraph about "applying activity theory to the analysis of web application requirements." (More information available here: http://informationr.net/ir/13-2/paper340.html).

The best short summary regarding what the activity theory is about is located at this address: http://www.learning-theories.com/activity-theory.html

16 November, 2010

Task 8: From mass media to personal media

Marika Lüders has written an article titled "Conceptualizing personal media" (2008), published in the journal New Media & Society. She states that "the digitization and personal use of media technologies have destabilized the traditional dichotomization between mass communication and interpersonal communication, and therefore between mass media and personal media."

What she's trying to say is that "traditionally mass communication is comprehended in contrast with interpersonal communication." Lüders states that "with the digitalization of media, in certain cases the same media technologies are used for both mass media and private individual purposes."

What does it mean, really? Nowadays the dynamics have changed quite a lot. Anyone with an internet connection can use various publishing platforms, share photos or befriend total strangers. The digitization and personal use of media technologies have empowered people. The new media discourse has had a serious impact on how people communicate with each other.

It's not only that the same technologies are used for both mass media and private individual purposes, but the line between public and private has become increasingly thin. People have become more vulnerable than ever. People are sharing more information about their lives than ever.

The following video is a good example regarding how we hand out information without really thinking things through (thus creating privacy risks) and how these new technologies are used for interpersonal communication. Would anyone actually share this information with roughly 200+ people? Or is it just a habit we've become used to (since technology enables it)? And most importantly, is that how life's going to be from now on?

14 November, 2010

A Companion to Digital Humanities: Multimedia. A Critical Review

Introduction

"A Companion to Digital Humanities" offers a collection of articles (37 altogether) about the field of humanities computing. One of those articles, written by Geoffrey Rockwell and Andrew Mactavish, is titled "Multimedia" (A Companion to Digital Humanities, ed. Susan Schreibman, Ray Siemens, John Unsworth. Oxford: Blackwell, 2004. http://www.digitalhumanities.org/companion).

The following is a critical review regarding this article. A short introduction about the topic is provided by the authors:

"How do we think through the new types of media created for the computer? Many names have emerged to describe computer-based forms, such as digital media, new media, hypermedia, or multimedia. In this chapter we will start with multimedia, one possible name that captures one of the features of the emerging genre."

The article was published in 2004, so we have to take into account that its contents may not be as contemporary as one might expect.


What is Multimedia?

Rockwell and Mactavish introduce two definitions of multimedia and propose a third one that "combines many of the features in the others with a focus on multimedia as a genre of communicative work:"

A multimedia work is a computer-based rhetorical artifact in which multiple media are integrated into an interactive whole.

The authors go on and use parts of the definition to analyze multimedia. These parts include:

- "computer based"
- "rhetorical artifact"
- "multiple media"
- "integrated whole"
- "interactive"

Although short and concrete, this definition needs to be updated, and some critique is in order. Today multimedia can be created and accessed on a number of devices, including cellphones, MP3 players and tablets, and the list is by no means final. Cellphones now carry 1 GHz processors, so the word "computer" may refer to a variety of devices.

Also, "rhetorical artifact" is a very vague term to describe content. If we look at Facebook, it's certainly a multimedia gateway (sending, receiving and creating videos, photos and text using the Facebook platform). On the other hand, it's a collection of data and it has administrative capabilities regarding content.

The authors have excluded randomness from the definition of multimedia by stating that there's a creator and an intent for the work to be experienced as an artistic whole. Yet different platforms and technologies allow random content creation (user-generated content) that is forwarded to the recipient in the form of a feed. The feed itself can combine various types of media and provide access to various types of media. By recording Chatroulette (www.chatroulette.com) sessions, one may compile a totally random multimedia work. Examples can be found on YouTube.


Types of Multimedia

The authors argue that "the challenge of multimedia to the humanities is thinking through the variety of multimedia artifacts and asking about the clusters of works that can be aggregated into types."

They propose the following list:

- Web hypermedia
- Computer games
- Digital art
- Multimedia encyclopedia

Such categorization is always dangerous: it leaves little room for change, and things do tend to change, especially in the field of multimedia and everything interactive. Web hypermedia is a good primary category, but it's too general; today all the other categories might as well be subsets of the first one. Rigid categorization is out of date, because information, whatever its purpose, is consumed and manipulated online.


History

The history section of this article requires little critique: it's accurate, but not very comprehensive. This is understandable and probably due to length restrictions and the primary focus of the article.

Numbers and text, images, desktop publishing, authoring environments, sound, digital video and virtual space are discussed. Various categorizations may exist.


Main Academic Issues

Rockwell and Mactavish introduce a few of the academic issues related to multimedia:

- Best practices in multimedia production.
- Game criticism and interactivity.
- Theories and histories of multimedia.

The list is by no means comprehensive. Many other issues exist, but Rockwell and Mactavish do not argue otherwise. All of the mentioned issues are still being studied today, but some new and interesting issues have emerged. For example, the use of multimedia in the field of education and learning.


Conclusion

The authors seem to think of multimedia as something static. They propose very rigid and concrete categories and neglect to mention the dynamic essence of multimedia. The article is an overview of the topic of multimedia, not a comprehensive analysis. The authors understand that and suggest links and materials for further reading. Even so, the article should have taken into account the possibility of change.

07 November, 2010

Task 7: In search for my own understanding of interactivity

We have been living the virtual revolution for the past 15+ years. Everything from how we interact with one another to how we do business, or even how we learn, has changed. The WWW has become the most popular technology in the world. We shop online, we communicate online, we order food online, we pay our bills online, we work online, we watch movies online. There are not many activities that lack online presence, or interactivity for that matter. You can even have online sex. Or commit suicide while online. Humankind has become totally immersed.

Jensen quotes a Newsweek article from 1993 where the term interactivity was described as follows:

... a huge amount of information available to anyone at the touch of a button, everything from airline schedules to esoteric scientific journals to video versions of off-off-off Broadway. Watching a movie won’t be a passive experience. At various points, you’ll click on alternative story lines and create your individualized version of “Terminator XII”. Consumers will send as well as receive all kinds of data ... Video camera owners could record news they see and put it on the universal network ... Viewers could select whatever they wanted just by pushing a button ... Instead of playing rented tapes on their VCRs, ... [the customers] may be able to call up a movie from a library of thousands through a menu displayed on the TV. Game fanatics may be able to do the same from another electronic library filled with realistic video versions of arcade shoot-’em-ups ... (Newsweek, 1993:38).

In 2008, Michael Wesch (an anthropologist) presented his talk titled "An anthropological introduction to YouTube" at the Library of Congress (US). This video is probably the best way to show how the things Jensen wrote about have become a reality.


Kiousis came to the conclusion that there's no point in refining a single definition. He felt very strongly about combining various definitions of interactivity to form one that encompasses all the possible characteristics of the term at hand. He proposed a conceptual definition:

Interactivity can be defined as the degree to which a communication technology can create a mediated environment in which participants can communicate (one-to-one, one-to-many, many-to-many), both synchronously and asynchronously, and participate in reciprocal message exchanges (third-order dependency). With regard to human users, it additionally refers to their ability to perceive the experience as a simulation of interpersonal communication and increase the awareness of telepresence.

I agree that this may, in fact, be the best academic approach so far. But I would take it even further. Since interactivity (the term and the consequent reality) is in a constant state of flux, we cannot escape the need to redefine everything after a short while, be it a new technology, a new platform for interaction or a change in legislation.

From a scientific perspective, one must always seek to narrow down the focus of any problem at hand. So, it's imperative that we discuss the term interactivity in relation to specific categories. But how would one define or list these categories? My guess is that interactivity should not be perceived as something that is limited by anything.

In 2010, Mark Zuckerberg stated that they "are building a web where the default is social" (read more about it here: http://techcrunch.com/2010/04/21/zuckerbergs-buildin-web-default-social/). What it means is more interactivity. Everywhere.

Let's analyze the status quo (i.e. my current level of interactivity):

- Skype
- Gmail & Google Talk
- Blogger
- Facebook
- TechCrunch

These are the webpages/applications currently active on my laptop (on a quiet evening). On any given day that list would be a bit longer. One might find open sessions of MSN Messenger, Mashable, NY Times, WSJ, HBR, eBay, YouTube and many more sites/apps/forums on my computer screen.

So, there you have it, every type of interaction from an asynchronous Youtube video playback and online commentaries to instant messaging and real time video/voice chat. We have it all and we don't even notice it anymore.

I would argue that the main concept of interactivity has remained unchanged, but its application has become more diverse.

31 October, 2010

Task 6: "Interactivity: a concept explication." A summary.

The article at hand was written by Spiro Kiousis and published by SAGE Publications in their journal New Media & Society in 2002. The title suggests that the concept of "interactivity" will be explicated (clarified). In the abstract Kiousis states that interactivity is both a media and a psychological factor that varies across communication technologies, communication contexts and people's perceptions.

Kiousis argues that with the ongoing influx of new communication technologies, many traditional concepts in mass communication are being redefined, reworked, and reinvented. Many scholars have highlighted the confusion embedded in theoretical discussions surrounding the concept of interactivity. These questions ask whether interactivity is a characteristic of the context in which messages are exchanged, whether it is strictly dependent upon the technology used in communication interactions, or whether it is a perception in users' minds.

Kiousis executed the following steps to complete the project:

(1) provide a general background of interactivity;

In this step Kiousis states that one must first pinpoint some relevant assumptions (e.g. that interactivity is associated with new communication technologies). He arrives at the conclusion that the paucity of theoretical consensus can have dramatically different implications in more practical and operational terrains.

(2) survey relevant literature on the concept;

Kiousis explains that the literature review of interactivity is cumbersome due to the vast number of implicit and explicit definitions prepared by researchers from many different academic and professional perspectives. It's important to narrow the focus. Kiousis defines two dimensions for the literature: 1) intellectual perspective; 2) object emphasized.

(3) identify the concept's central operational properties;

Kiousis states that, based on the literature review, it is clear that operational definitions of interactivity revolve around measuring specific dimensions or subconcepts of the term. He then provides a lengthy overview of relevant theory.

(4) locate present definitions of the concepts;

Kiousis provides a few examples of the definitions of interactivity and then arrives at the conclusion that some common variables exist (provided as follows).

Two-way or multiway communication should exist, usually through a mediated channel. The roles of message sender and receiver should be interchangeable among participants. In addition, some third-order dependency among participants is usually necessary. For the most part, communicators can be human or machine, often contingent upon whether they can function as both senders and receivers. Individuals should be able to manipulate the content, form, and pace of a mediated environment in some way. Users should be able to perceive differences in levels of interactive experiences.

(5) evaluate and modify those definitions;

Kiousis argues that there's no need to overthrow or improve previous definitions; it's better to merge them into a single hybrid definition by eliminating all the non-essential parts.

(6) propose a conceptual definition;

Kiousis provides a conceptual definition as follows:

Interactivity can be defined as the degree to which a communication technology can create a mediated environment in which participants can communicate (one-to-one, one-to-many, many-to-many), both synchronously and asynchronously, and participate in reciprocal message exchanges (third-order dependency). With regard to human users, it additionally refers to their ability to perceive the experience as a simulation of interpersonal communication and increase the awareness of telepresence.

(7) propose an operational definition; and

Kiousis links the operational definitions with the conceptual definition:


(8) discuss the implications on future research of the arrived-at-definition.

Kiousis does not elaborate much on this topic. He simply states that definitions have been outlined that blend the most important elements of prior conceptions into a concise framework. He says that interactivity will remain a controversial concept in the literature, but hopes that this explication has granted a clearer picture of interactivity and how it may be studied in future investigations.

24 October, 2010

Task 5: "Interactivity. Tracking a New Concept in Media and Communication Studies." A review.

The article chosen for the review at hand is titled "Interactivity - Tracking a New Concept in Media and Communication Studies." It was written by Jens F. Jensen in 1998, so it's not very recent. Although the article lacks novelty, it most certainly provides an insight into when and how the term "interactivity" was first coined and what lies beneath this ambiguous word.

Newsweek was correct in 1993 when they commented on the new hype and suggested that it's a "zillion dollar industry." The predictions were correct as well - an interactive life will indeed put the world at your fingertips. Or mine for that matter.

The term interactivity was described as follows:

a huge amount of information available to anyone at the touch of a button, everything from airline schedules to esoteric scientific journals to video versions of off-off-off Broadway. Watching a movie won’t be a passive experience. At various points, you’ll click on alternative story lines and create your individualized version of “Terminator XII”. Consumers will send as well as receive all kinds of data ... Video camera owners could record news they see and put it on the universal network ... Viewers could select whatever they wanted just by pushing a button ... Instead of playing rented tapes on their VCRs, ... [the customers] may be able to call up a movie from a library of thousands through a menu displayed on the TV. Game fanatics may be able to do the same from another electronic library filled with realistic video versions of arcade shoot-’em-ups ... (Newsweek, 1993:38).

What it means, is that roughly 20 years ago, industry professionals were able to predict the future. All of the above has become a reality.

Jensen suggests that interactivity is a "media studies blind spot." Back in the 1990s none of the handbooks in the field of communication had listed the term. A shift in the paradigm had occurred, but the discourse had remained unchanged. Jensen goes on to explain how interactivity can be categorized and perceived, thus providing a structure necessary for the contemporary discourse on new and interactive media.

14 October, 2010

Task 3: Similarities and differences. Process comparison.

Kristo's process

Kristo has used a very clean-cut approach to describe the process of creating his study plan. It's basically a visualization of the curriculum functionality. Activities related to personal life (e.g. work, hobbies, other commitments) have not been taken into account.

On the plus side, this is what the process of creating a study plan should look like. If a person is not engaged with work or family matters, he/she has the ability to create a study plan according to the perfect scenario, and this is it. On the minus side, this is usually not the case with master's students. Most of us/them have daily commitments (be it a child, a day job or even both).

The process visualization can be viewed here:

Maarja's process

Maarja has described her study plan creation process in great detail. On the plus side, her approach is very focused on her person and her needs. Her process is a very good example of personal time management. On the minus side, it's not universal. The process can be used as a guideline, but the outcome will be different for every person.

The level of detail regarding the process of creating a study plan is satisfactory. It takes into account small things like color coding and large things like the curriculum. What it (the visualization) lacks is the description of other commitments (e.g. work, family).

The process visualization can be viewed here: http://maarjapajusalu.files.wordpress.com/2010/10/task2.png

Maibritt's process

I love the fact that Maibritt has it all figured out (in great detail). She has an overview about how she's bound to spend most of her weeks/days/hours. Maibritt also has to take into account other variables aside from school (e.g. work and family). On the plus side, she has described all her variables. On the minus side, her decision making process seems a bit vague and hard to understand.


Norbert's process

Norbert's process is probably the one I can relate to most. He has to take into account the same variables as I do (work, family, hobbies etc.). The process description is fairly similar to the one I've described in my weblog. On the plus side, it's nice to see that there are others who prefer the macro view of things regarding time management. On the minus side, this example lacks visualization. Not that it's necessary, but it would have been interesting to compare Norbert's process to mine.

Gert's process

Most of the participants have used a mindmap of some sort to show that "these things are connected" and "I make a decision based on these criteria" (as did I). What I really liked about Gert's process description was the fact that he broke it all down into factors and priorities. It's a very clear way of expressing all the variables that need to be taken into account while creating one's study plan. The next step would have been to explain the personal priorities and how they affect the choices at hand. The level of detail could have been a bit greater, but other than that, it's a very neat process visualization.

The process visualization can be viewed here: http://zavatski.files.wordpress.com/2010/10/image3.png

Conclusion

After reading all the different process descriptions and looking at various process visualizations, I started noticing the things my initial mindmap was lacking. For example, I do describe priorities, but I don't name them clearly. Also, my process is considering the macro view with not enough focus on "how I decide which courses to choose." In conclusion, I've received a few great ideas on how to improve my mindmap and plan my activities more efficiently. I really liked that a lot of the participants used a structured approach (e.g. a mindmap or other visualization).

Task 2: Time management and process mapping

I've always enjoyed structuring the world around me. It is my way of solving the puzzle of life and creating order out of chaos. By applying structure to any problem/assignment, its size and scope become easier to grasp.

Time management (creating a study plan does fall into this category) has never been (nor will it ever be) easy. See example: http://wulffmorgenthaler.com/strip.aspx?id=696e818d-68b1-4dfc-9365-c6b822fc518f

Nowadays day planners have been replaced with interactive tools (such as Outlook Calendar/Google Calendar and interactive tasklists), but time management has not become easier. With the help of new tools (and in a new interactive age) we are simply able to do more, but the amount of things that require our attention, is horrific.

While planning my activities, I must take into account all of the following:

  • My day job
  • My two companies
  • IMKE curriculum
  • Driving school
  • Family and friends

There are probably more variables to this equation (creating a study plan), but the above list is essential.

My day job is a "must be" element in this equation - 8 hours per day, 5 days a week. Luckily the organization supports self-improvement, so attending a few important lectures/exams occasionally is not a problem. That covers work and school (IMKE curriculum/master's studies). The two companies and related activities are currently on hold. Everything related to the driving school is scheduled either before or after work. If there's any time left, I usually spend it with my family/partner and friends.

An explanatory (XMind) mindmap portrays the situation more clearly (click to enlarge):


07 October, 2010

Task 1: Previous experience with webpublishing

Everybody remembers the '90s - computer screens were small, cellphones were big, and anybody who was anybody had to have a personal webpage. At least that's how it was for the digital natives while growing up. In that sense I was no different. I started experimenting with MS Frontpage and Macromedia Dreamweaver. I learned how to manipulate HTML and became very interested in ICT.

After a short while, content management systems (CMS) were the new cool thing. CMS's made the web more accessible for anyone who had something to say, but did not have the technical competence to publish said information on the web. A few of these content management systems became more famous than others. Wordpress, for example, was first released in 2003. Blogger.com was first launched in 1999. Another popular CMS (at least back in the day) was b2evolution (http://b2evolution.net/). It's still available, but has become less popular due to vast competition.

Regarding personal experience, I can only guess the number of CMSs I've tried. But it's safe to say that I've tried at least 50 different webpublishing platforms (CMS, blogging, forum software). It may seem like a lot of hassle, but in reality, it's not hard to try and test several webpublishing platforms in a single day.

But webpublishing does not end with CMSs. We have Twitter, Facebook, LinkedIn, Orkut, MySpace and many more. The creator of Facebook, Mark Zuckerberg, suggested that the future of the web is social. In other words, webpublishing in all its variety is bound to become even more popular than it is today. People will continue to express themselves online, be it pictures, videos or texts.

IMKE students use wikis and blogs (as do I). They (IMKE students) have a Facebook fanpage to share information regarding courses, and one can't access the information without having a Facebook account. Facebook is a must-have platform for networking purposes. People do publish pictures, texts and videos and seek information about their friends and family, but the platform may be perceived as a marketing and networking tool as well.

It's very hard to quantify experience, but it's safe to say that I use most of the popular webpublishing tools out there today.


19 December, 2009

A blogged review of "Comments and Ethics"

"Comments and Ethics" was written by Andris Reinman and Raul Reiska, under the GPL v3 licence, using the code.google.com wiki environment. The team paper can be found at: http://code.google.com/p/ethicsandlawinnewmedia/wiki/MainPage

Since the team consisted of only two people, some concessions regarding the length of the paper were expected. Although the length of the paper was expected to be in accordance with the number of the team members (and it was), in-depth analysis was still expected.

The first positive thing about the team paper involved the clear distinction of producing text under a specific licence (General Public Licence version 3). The use of local and region-specific materials was also well received. Since Estonia has a good history regarding freedom of speech in online media, the analysis was valid and well grounded.

The paper had less structure than expected: there were two individual contributions rather than a single team paper. That's not necessarily a bad thing, though. Since the team consisted of only two members, it was easier to evaluate individual contributions this way. Still, there was very little evidence of cooperation.

The first contributor, Andris Reinman, wrote about "anonymous bashing" and excelled in creating a decent overview of the said topic. However, some of the translations were a bit raw or even missing. The contribution was good, but it would be a bit hard to understand for someone who is not from Estonia (e.g. "Delfi eelnõu" was not translated, only explained).

The second contributor, Raul Reiska, wrote about "the internet phenomenon" also known as meme. Although the data presented in the contribution is correct and interesting, it's hard to see how it relates to the topic at hand.

In conclusion, the contributions were worth reading and the authors have done their part, but some additional cooperation among the team members would have resulted in a better team paper.

Analogue aliens, digital friends

Kaido Kikkas writes that:

"The Internet can be a serious chance for disadvantaged people. If some young lady meets a young man who is using a wheelchair, then in 'real life', it takes some courage to even think about any closer relations." The Internet offers some additional options regarding this scenario. While online, these two people are considered equal, thus enabling to get to know the other person without bias.

As the above video so humorously illustrates, the Internet can be used to create an illusion of oneself. A digital persona, if you will.



In terms of minorities, more and more elderly people have started participating in online discussions (newspaper commentaries), thus making their voice more visible. For example, the well-known IT advisor and journalist Arvo Mägi is currently 74 years old. The Internet has allowed the elderly to reinvent themselves and feel more in touch with the world today.

18 December, 2009

Against intellectual property, strategies for change

Brian Martin suggests the following strategies to rebel against IP:

1) Change thinking. "The way that an issue is framed makes an enormous difference to the legitimacy of different positions. Once intellectual property is undermined in the minds of many citizens, it will become far easier to topple its institutional supports."

2) Expose the costs. "It can cost a lot to set up and operate a system of intellectual property. And once the figures are available and understood, this will aid in reducing the legitimacy of the world intellectual property system."

3) Reproduce protected works. "By trying to hide the copying and avoiding penalties, the copiers appear to accept the legitimacy of the system."

4) Openly refuse to cooperate with IP. "Once mass civil disobedience to intellectual property laws occurs, it will be impossible to stop."

5) Promote non-owned information. "Until copyright is eliminated or obsolete, innovations such as copyleft are necessary to avoid exploitation of those who want to make their work available to others."

6) Develop principles to deal with credit for intellectual work. "The less there is to gain from credit for ideas, the more likely people are to share ideas rather than worry about who deserves credit for them."

All of the above principles seem a little bit radical in black and white, but a large portion of people already act according to them. Perhaps one of the more radical suggestions is the notion to openly refuse to cooperate with IP. It may also be a bit hard to expose the costs, but all the other principles seem sane enough to work if given the chance and time.

The fact that people are already adapting to these new practices regarding IP shows that consumers dictate the mechanisms of intellectual property development.

The following videoclip has Stephan Kinsella talking about IP and libertarianism:

Too much force, the scoop on digital enforcement

The following video humorously illustrates the concept of DRM in the real world:



As many people point out, there's nothing wrong with the idea of creating copies of the materials one has purchased. It is a very common practice to preserve the acquired data carrier (e.g. DVD, CD) and use the copy instead for everyday viewing.

If a person has paid for it, he/she should reserve the right to reproduce or modify the data to a certain degree for personal usage.

Software licensing landscape in 2015

The following video humorously illustrates the harrowing clauses in modern software licensing.



The given perspective of five years (2010 - 2015) offers us a very short timeframe. During those five years, a lot of applications are bound to go online. Google Inc. has shown remarkable success in producing online software (Google Docs, Chrome OS) and this is a growing trend.

This is something that may have an impact on proprietary software and on software licensing in general. Today Microsoft doesn't have to worry about free Unix-based platforms, since there is a much bigger threat on the horizon - Google.

Divide and conquer, the digital divide in Estonia

Praxis (an institute for political research) determined that the barriers to internet usage in Estonia include financial and emotional reasons, as well as a lack of skills. (see source)

Digital immigrants have been forced to embrace the internet every step of the way. Doctors, nurses, teachers, policemen and people from many other fields have been forced to comply with the new rules of submitting information. This shift in methods has been fairly quick, but in many cases painful.

The following video clip illustrates the severity of the digital divide and investigates a few ideas on how to overcome it.



Although Estonia has a very high rate of internet penetration and access is available even in rural areas (e.g. public libraries, RDSL solutions, WiMAX solutions, community Wi-Fi networks etc.), the internet is not regarded as a useful tool by digital immigrants. It is mostly used for reading online newspapers and the like. So, even with the technology and the possibilities in place, people are still unable to overcome the digital divide - they still require training.

The following video clip offers some insight into the economic impact of the digital divide.

11 December, 2009

Essay: Social media and the decline of privacy

Introduction

The essay at hand deals with four major new media aspects: social media, constructivism, privacy and security. The nature of these aspects and their relation to each other will be discussed in detail. The emergence of new interaction patterns and social infrastructures has created a situation where people are able to express themselves freely and communicate without boundaries, but also expose themselves to a variety of risks.

The author of this essay is fascinated by the idea of overexposure resulting in a zero-privacy world. In other words, this essay sets out to prove that the accumulation of personal data on the internet is beyond our control, and thus carries risks for our privacy and security in general.

On December 25th, 1990, Tim Berners-Lee implemented the first successful communication between an HTTP client and a server via the Internet, thus creating the World Wide Web (Lee 1990). That was the beginning of Web 1.0. The term Web 2.0 (social media) was first mentioned by Darcy DiNucci in her article "Fragmented Future" (DiNucci 1999).
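As an illustration only (this is not Berners-Lee's original code), the kind of exchange he implemented can be sketched in a few lines of Python: a tiny HTTP server answers a single GET request from a client. The Python standard library stands in for the original CERN software, and all names here are hypothetical.

```python
import http.client
import http.server
import threading

# A minimal HTTP server, standing in for the first web server.
class HelloHandler(http.server.BaseHTTPRequestHandler):
    def do_GET(self):
        body = b"<h1>Hello, World Wide Web</h1>"
        self.send_response(200)
        self.send_header("Content-Type", "text/html")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *args):
        pass  # keep the demo output quiet

server = http.server.HTTPServer(("127.0.0.1", 0), HelloHandler)  # port 0 = any free port
threading.Thread(target=server.serve_forever, daemon=True).start()

# The client side of the exchange: one GET request, one response.
conn = http.client.HTTPConnection("127.0.0.1", server.server_address[1])
conn.request("GET", "/")
response = conn.getresponse()
status = response.status
body_received = response.read().decode()
conn.close()
server.shutdown()

print(status)         # 200
print(body_received)  # <h1>Hello, World Wide Web</h1>
```

The request/response pattern shown here is still the basic mechanism underneath every Web 1.0 and Web 2.0 application.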

It has been roughly 10 years since the emergence of social media. In terms of private information, people have neglected to hide their phone numbers, social security numbers, home addresses, e-mail addresses, work-related data, credit card numbers, real names, sexual preferences, hobbies, financial status, names of family members and friends, health information, licence plate numbers etc. What's going to happen during the next ten years?

Social media and constructivism

When discussing social media and how it relates to privacy issues, we must first define the term itself. Andreas Kaplan and Michael Haenlein state that social media is "a group of Internet-based applications that build on the ideological and technological foundations of Web 2.0, and that allow the creation and exchange of user-generated content" (Kaplan & Haenlein 2010). The mention of user-generated content is of great importance: that is the mechanism from which all the privacy issues derive. Users are the ones who generate content about themselves and their peers.

These Internet-based applications include blogs, social networking sites, learning environments, wikis, photo and video sharing environments, audio and music sharing sites, bookmarking sites and many more. Simple asynchronous interaction mechanisms have been replaced by more complex collaboration-based systems in order to create content and avoid delays in the process. It has become surprisingly easy to add information to different networks.

By adding information and generating content, users engage in a reciprocal process of creating knowledge for each other. This notion brings us to the concept of constructivism, which we must therefore define. One can agree with McMahon, who states that social constructivism emphasizes the importance of culture and context in understanding what occurs in society and constructing knowledge based on this understanding (McMahon 1997).

When we analyze social networking practices, similar patterns emerge. Social media can be viewed as a micro-society in which all conventional rules apply (including peer pressure). Social networking sites thrive on this knowledge and use it as a binding mechanism for users. If a user decides to join a group/community, he/she is instantly given the opportunity to invite his/her friends to the same group. The same principle applies to all user-generated content. It's very hard to say "no" to a friend, so people willingly agree to co-create content.

The decline of privacy

Digital natives are people who "grew up with the internet and technology." Being "online" all the time has become a necessity. Blogging about their thoughts, using Twitter several times a day to send short status updates, participating in social networking environments (Facebook, Orkut, Friendster), using instant messaging, sending and checking e-mails and participating in social activism initiatives - this is the new reality.

The amount of content creation by digital natives is so immense that nobody really has the time to control and censor the sensitive data. Tweeting about one's vacation may result in burglary. Leaving a complete profile of yourself online may result in malicious social engineering practices and scams. Leaving outdated childish information online about oneself can result in being turned down for a job.

Digital natives tend to maintain their relationships by using social networking sites. It's not uncommon at all to have detailed personal information about someone you knew 10 years ago and haven't seen since. So, having 500 or more "friends" on Facebook is not really an achievement. But each and every one of these people has access to sensitive information about the user he/she befriended.

Human resource executives, insurance companies, schools and many other institutions and individuals constantly monitor social media applications to gather intelligence on prospective employees, students, partners etc. Using Google for "background checks" is common practice in business circles. People have learned how to use social media to their advantage, and that tendency is growing rapidly.

But the decline of privacy doesn't stop there. Since people rarely read end-user licence agreements (EULAs), they may be unaware of how their data is being used for profit. For example, Google Inc. reserves the right to use the profile information of Orkut users for advertisement purposes. Google also records all search queries by default; this function has to be turned off manually. There are very few aspects of a person's online life that remain unrecorded by some entity.

The Google EULA states that: "By submitting, posting or displaying the content you give Google a perpetual, irrevocable, worldwide, royalty-free, and non-exclusive licence to reproduce, adapt, modify, translate, publish, publicly perform, publicly display and distribute any Content which you submit, post or display on or through, the Services."

Security risks regarding openness

Sonia Livingstone is the Head of the Department of Media and Communications at the London School of Economics and Political Science (LSE). In 2008 she published an article titled "Taking risky opportunities in youthful content creation: teenagers’ use of social networking sites for intimacy, privacy and self-expression" (Livingstone 2008).

This article holds a great deal of relevance, since teenagers are the biggest risk group. Oftentimes young people tend to overlook signs of danger. Since teenagers are very keen on self-presentation, they are the group most likely to reveal excessive information about themselves.

In the introduction Livingstone states that "it is commonly held that at best, social networking is time-wasting and socially isolating, and at worst it allows paedophiles to groom children in their bedroom or sees teenagers lured into suicide pacts while parents think they are doing their homework" (Livingstone 2008).

While Livingstone's views may be a bit grim, she is still correct in not underestimating the new playing field. How would one study the phenomenon of social networking when such infrastructures have only existed for the past 10 years? The apparent anonymity of the Internet is both a curse and a blessing. While able to express themselves freely, people are still not in charge of the information distributed about them. So, how can we fix it?

Conclusion

The most efficient solution would probably be "educated media consumers." Although media classes are appearing here and there in different curricula, the focus on new media is very small. Trial-and-error practices will continue to flourish as long as social media practices remain unstandardized. At some point, online security will probably become an individual course in most schools.

Kevin Mitnick has said that "security is too often merely an illusion, an illusion sometimes made even worse when gullibility, naivete, or ignorance come into play. In the end, social engineering attacks can succeed when people are stupid or, more commonly, simply ignorant about good security practices." (Mitnick 2002). One could agree with Mitnick by saying that common sense is the best tool we've got in terms of protecting ourselves on the World Wide Web.

Even though Google has "removal tools" for removing inappropriate content, this method remains ineffective. Getting rid of bad content is a very time-consuming endeavour. Implementing a "removal tax" would not be a good solution either, since some people may want to erase their criminal records or sex offender statuses, thus doing more harm than good.

The problem of "being naked" and "having no way to solve it" still remains. The only ones able to exercise any control are the content creators themselves.

Social networking sites nowadays have "privacy options" - if a user doesn't want to share his/her pictures with family or colleagues, he/she can opt not to. But within this constant flood of infotainment, people rarely take the time to make these modifications. If the aforementioned user has 500 friends on Facebook, it would take him/her a very long time to categorize these people in order to apply different privacy settings. So, people take calculated risks.
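The categorization described above can be sketched as a simple mapping from friends to categories, and from content types to the categories allowed to view them. All names and data structures here are purely illustrative and do not reflect any social networking site's actual implementation.

```python
# Each friend is assigned to one category (the time-consuming step
# for a user with 500 friends).
friends = {
    "Alice": "family",
    "Bob": "colleagues",
    "Carol": "acquaintances",
}

# The user's privacy policy: which categories may see which content.
visibility = {
    "photos": {"family"},
    "status_updates": {"family", "colleagues"},
    "contact_info": set(),  # shared with nobody
}

def can_see(friend: str, content_type: str) -> bool:
    """Return True if this friend's category may view the content type."""
    return friends.get(friend) in visibility.get(content_type, set())

print(can_see("Alice", "photos"))  # family may see photos -> True
print(can_see("Bob", "photos"))    # colleagues may not -> False
```

The policy itself is only a few lines; the real cost, as the paragraph above notes, is manually sorting hundreds of friends into the categories in the first place.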

Tapscott & Williams argue that the youth of today are active creators of media content and hungry for interaction, but also tend to value individual rights, including the right to privacy and the right to have and express their own views (Tapscott & Williams 2006, 47). Perhaps this need for privacy is the driving force behind what will one day be known as the "educated online media consumer." It's only logical to assume that people adapt over time.

References

1) Berners-Lee, T. (1990). WWW project history. Available: http://www.w3.org/History/19921103-hypertext/hypertext/WWW/History.html. Last accessed 11 December 2009.

2) DiNucci, D. (1999). Fragmented Future. Available: http://www.cdinucci.com/Darcy2/articles/Print/Printarticle7.html. Last accessed 11 December 2009.

3) Kaplan, A. M., Haenlein, M. (2010). Users of the world, unite! The challenges and opportunities of social media. Business Horizons, Vol. 53, Issue 1, p. 59-68.

4) McMahon, M. (1997). Social Constructivism and the World Wide Web - A Paradigm for Learning. Paper presented at the ASCILITE conference. Perth, Australia.

5) Livingstone, S. (2008). Taking risky opportunities in youthful content creation: teenagers' use of social networking sites for intimacy, privacy and self-expression. New Media & Society, 10 (3), 393-411.

6) Mitnick, K. (2002). The Art of Deception. New York: John Wiley & Sons. p. 12.

7) Tapscott, D., Williams, A. D. (2006). Wikinomics: How Mass Collaboration Changes Everything. New York: Portfolio.

Environments: Linked in ... to what?

LinkedIn is both a collaborative environment and a social network for people who wish to define and map their work-related contacts. The LinkedIn website states that "over 50 million professionals use LinkedIn to exchange information, ideas and opportunities."

Through your network you can (see reference):

1) Manage the information that’s publicly available about you as a professional
2) Find and be introduced to potential clients, service providers, and experts
3) Create and collaborate on projects, gather data, share files, solve problems
4) Be found for business opportunities and find potential partners
5) Gain new insights from discussions with likeminded professionals
6) Discover inside connections that can help you land jobs and close deals
7) Post and distribute job listings to find the best talent for your company




As the above video sarcastically states, the "network of professionals" may be a bit misunderstood by some groups of people. The theory of constructivism states that reality is not something that exists on its own; reality is constructed and thus perceived as something that is. So, by linking ourselves to different people, we create an illusion we wish others would perceive as reality.

Don Tapscott and Anthony D. Williams stated in their book (Wikinomics. How Mass Collaboration Changes Everything) that "recently, smart companies have been rethinking openness, and this is beginning to affect a number of important functions, including human resources, innovation, industry standards, and communications." Today companies that make their boundaries porous to external ideas and human capital outperform companies that rely solely on their internal resources and capabilities (Tapscott & Williams, 2006).

In other words, networking on different levels (personal and/or organizational) may prove to be fairly profitable if implemented correctly.

Media & economy: Virtual sex, real money

DISCLAIMER: The video below is "office safe."

At first it may seem a little pretentious and arrogant to talk about sex in relation to new media and the economy. While learning to design iPhone applications, I came across a book titled "Building PhotoKast: Creating an iPhone app in one month."

There is a chapter in that book about "Designing the 7 deadly sins." The concept is fairly simple: if you want your product/idea to be successful, you have to focus on one or more of the seven deadly sins (lust, gluttony, greed, sloth, wrath, envy, pride). As the video below clearly illustrates, "virtual sex" is a very popular commodity.

Just look at the facts:

1) 12% of all websites are pornographic
2) 25% of search engine requests are pornographic
3) 35% of all internet downloads are pornographic in nature
4) "Sex" is the most searched word on the internet
5) US revenue from internet porn in 2006: $2.84 billion
6) ... more information available in the video


Privacy & security: The risk of being human

Although there are many software and hardware related privacy/security risks, there's one risk that experts know about, but tend to overlook - the human factor.

Kevin Mitnick has said that "companies spend millions of dollars on firewalls and secure access devices, and it's money wasted because none of these measures address the weakest link in the security chain: the people who use, administer and operate computer systems."

In the following video Kevin Mitnick describes in great detail how he faked his way into an LA Telco central office using his social engineering skills.



Mitnick wrote in his book (The Art of Deception) that "security is too often merely an illusion, an illusion sometimes made even worse when gullibility, naivete, or ignorance come into play. In the end, social engineering attacks can succeed when people are stupid or, more commonly, simply ignorant about good security practices."

Anyone who thinks that security products alone offer true security is settling for the illusion of security. It's a case of living in a world of fantasy: they will inevitably, later if not sooner, suffer a security incident (Mitnick, 2002).