Towards technical aspects of CSCW: Web 2.0 and Collaborative Visualization

December 25, 2015

Many years have passed since the advent of Web 2.0. There have been plenty of conferences, journals, and articles about it, and many people have written about it on the web and in print. We have also used countless Web 2.0 websites, especially since social media became popular. Still, there is no real consensus on what the term actually means, and many cast doubt on the claims of the websites that count themselves as members of this field.


In fact, Web 2.0, like many other areas, does not have a sharply defined boundary, but it does have some common principles that constitute its core, and we could say that the more of these principles a particular web application follows, the more it resembles a Web 2.0 app. Let's take a look at some of the most important principles, with details and examples:

  1. The Web As Platform
    Some companies started their existence on the web and became entirely web-based, like Google and eBay. They differ in important ways from companies like Microsoft (considering what Microsoft was when the original article was published): they run multi-user (or rather many-user) applications, they do not sell software packages but provide a service, they work with huge amounts of data, and they deliver their service over a network, the internet.
    Much of Web 2.0's success comes from a concept called "the long tail". It refers to the fact that the collective power of many small sites can add up to an enormous force, an idea that many successful web-based companies, such as search engines, advertising networks, and web analytics services, make use of. The author says:

    The Web 2.0 lesson: leverage customer self-service and algorithmic data management to reach out to the entire web, to the edges and not just the center, to the long tail and not just the head.

  2. Harnessing Collective Intelligence
    So why do something with inaccurate, incomplete automated algorithms when users can do it far better, without any special effort?
    For example, many websites need to classify large amounts of user content. We could define fixed categories that every user who generates new content must pick from, but some Web 2.0 websites instead let users define the classification themselves through tags, hashtags, and topics. Delicious, Flickr, and the StackExchange Q&A network with tags, and now social media sites like Facebook and Twitter with hashtags, introduce a folksonomy in place of a static, taxonomy-based classification of web content (see the tag-index sketch after this list).
    As another example, Google Translate makes use of contributions from its users to learn and improve the quality of its translations.
    Gmail, similarly, uses the "report as spam/phishing" buttons to learn from users which content is likely to be unwanted or harmful.
    Sites like Foursquare and TripAdvisor benefit from user reviews, and there are plenty of other examples that draw on users' intelligence in other ways.
  3. Data is the Next Intel Inside
    Web 2.0 apps almost always deal with large amounts of data, to the point that they are sometimes called infoware rather than software. Many of these web apps have giant distributed databases spread across different locations to provide better performance, scalability, reliability, and availability. This data may be owned by the corporation itself, or by the users, as with online drives such as Google Drive.
  4. End of the Software Release Cycle
    Internet-era software is delivered as a service, not as a product. This fact leads to a number of fundamental changes: the functionality of the application becomes its core competency, and because users help us improve the quality of our service and enrich our data, they become a kind of co-developer, in a reflection of open-source development practices.
  5. Lightweight Programming Models
    Supporting lightweight programming models allows systems to be more loosely coupled. By providing a simple programming interface we let other applications communicate with ours easily, and by using other available services and open-source software we benefit from what they already provide without having to reinvent the wheel.
  6. Software Above the Level of a Single Device
    Even a basic web app already spans two devices, a host and a client, but in Web 2.0 we see web applications being used collaboratively by many users and from many kinds of devices: PCs and web browsers, smartphone apps, tablets, and even TVs and other multimedia devices. We can use them from anywhere, with our data kept in sync so that our latest changes are always available.
  7. Rich User Experiences
    Web technologies have advanced a great deal since the advent of the web in order to improve user experience: richer CSS effects and then CSS3, JavaScript for more dynamic client-side behavior, and Flash for GUI-like experiences. But the real revolution came with Ajax, itself a combination of several technologies: the DOM, data interchange using XML and XSLT (and now JSON), and asynchronous data retrieval using the XMLHttpRequest object and JavaScript. Ajax aims to bring a desktop-like user experience to the browser, so that users do not have to wait long or take special steps to see updates, and bandwidth and resources are not wasted recomputing what was already rendered. Google started this revolution in a significant way with Gmail and Google Maps, and today all modern social networks, as well as popular services like Google Analytics and the online word processors in Google Drive or Microsoft OneDrive, use it for features such as type-ahead search suggestions and loading the rest of a document automatically as you scroll (a minimal sketch of the pattern follows this list).
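
To make the folksonomy idea from item 2 concrete, here is a minimal TypeScript sketch. The Post type, category names, and tags are made up for illustration and do not reflect any real site's data model: a fixed taxonomy forces each item into one predefined bucket, while tags are free-form labels invented by users, from which a tag index can be built.

```typescript
// A fixed taxonomy: the site owner decides the categories up front.
type Category = "news" | "photography" | "programming";

interface Post {
  id: number;
  category: Category; // exactly one predefined bucket
  tags: string[];     // folksonomy: whatever labels users choose to attach
}

// Build a tag index from the labels users actually typed.
function buildTagIndex(posts: Post[]): Map<string, number[]> {
  const index = new Map<string, number[]>();
  for (const post of posts) {
    for (const tag of post.tags) {
      const ids = index.get(tag) ?? [];
      ids.push(post.id);
      index.set(tag, ids);
    }
  }
  return index;
}

// Hypothetical sample data.
const posts: Post[] = [
  { id: 1, category: "photography", tags: ["sunset", "beach", "hdr"] },
  { id: 2, category: "programming", tags: ["javascript", "ajax"] },
];

console.log(buildTagIndex(posts).get("ajax")); // -> [2]
```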
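
And to illustrate the Ajax pattern from item 7, the sketch below uses the browser's XMLHttpRequest object to fetch JSON asynchronously and update part of a page without a full reload. The endpoint /api/messages and the element id "inbox" are hypothetical, chosen only for illustration; modern code would typically use fetch, but XMLHttpRequest is the object the original technique was built on.

```typescript
// Minimal Ajax sketch: request data asynchronously and patch the page in place.
function loadInbox(): void {
  const xhr = new XMLHttpRequest();
  xhr.open("GET", "/api/messages", true); // true = asynchronous
  xhr.onload = () => {
    if (xhr.status !== 200) return;
    // Early Ajax apps exchanged XML; today JSON is the common payload.
    const messages: { from: string; subject: string }[] = JSON.parse(xhr.responseText);
    const inbox = document.getElementById("inbox");
    if (inbox) {
      inbox.innerHTML = messages
        .map((m) => `<li>${m.from}: ${m.subject}</li>`)
        .join("");
    }
  };
  xhr.send(); // the browser stays responsive while the request is in flight
}

loadInbox();
```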

We can categorize Web 2.0 artifacts into several major groups; here we introduce three of them:

  1. Social Media: Social media is not actually a new concept; it was commonplace in the past. So why is it making news now? Because in the 19th and 20th centuries we had broadcasting tools like TV and radio whose content was produced by a small number of people, which is the opposite of social media. Now we are back to social media, but it is much cheaper, thanks to the internet, and your voice can reach a large audience in the twinkling of an eye. It is not a waste of time, because it helps us communicate with each other better and gives us easier access to public information. It can help revolutions happen because it facilitates communication and coordination, but it does not actually "cause" revolutions or other unwanted (or perhaps wanted?) incidents. Most importantly, it is not a fad, because it has existed throughout history; it was the mass-media era that was the historical anomaly. We have had historical equivalents of blogs, microblogs, and even Instagram: pamphlets, coffeehouses, and image albums.
    Even though social media may change form again, as it did in the modern era, it is here to stay.
  2. Online Wikis: Wikipedia, the major example of an online wiki, was founded to provide a free encyclopedia for everyone. It uses a free license, so anyone can use its content freely, even for commercial purposes, and can copy and share it. It has no funding sources other than donations from the public, the people who use it and love it. It does not need much funding, however, because most things are self-managed, or rather user-managed: it has only one employee, a software developer, its servers are managed by volunteer system administrators, and the whole foundation is organized by a ragtag band of people.
    To maintain quality, it relies mostly on social policies together with some software features. The most important of these is the neutral point-of-view policy: editors do not try to write "the truth" about a topic, because everyone may have a different idea of what the truth is; instead they write what is verifiable and backed by credible references.
    Finally, social rules are left completely open-ended in the software: there is no voting system to decide whether an edit should be reverted. Instead, users write out the reasons for the positions they hold, and the final decision is made manually by weighing those arguments.
  3. Open-source software community: Open-source software is popular because it is transparent, benefits from the collective efforts of many individuals, and in many cases offers better quality than commercial products. Examples like the Apache and Nginx web servers, the MySQL and Postgres DBMSs, and many other development tools show this popularity relative to paid products, at least in software-based businesses. This goes back to the "harnessing collective intelligence" principle mentioned in the previous section, which shows the power of group work; it is also why a giant search engine like Google relies more on the number of backlinks to a website, or the click-through rate of its search results, than on any other single factor when ranking websites for different search queries (a toy sketch of backlink counting follows below).
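
As a toy illustration of backlink counting as a collective-intelligence signal, the sketch below ranks pages by how many other pages in a small link graph point to them. The graph and URLs are invented, and this is only a stand-in for the idea; it is not Google's actual ranking algorithm, which combines far more signals.

```typescript
// Toy backlink ranking: page URL -> list of URLs it links to.
type LinkGraph = Map<string, string[]>;

function rankByBacklinks(graph: LinkGraph): [string, number][] {
  const backlinks = new Map<string, number>();
  for (const targets of graph.values()) {
    for (const target of targets) {
      backlinks.set(target, (backlinks.get(target) ?? 0) + 1);
    }
  }
  // Highest backlink count first.
  return [...backlinks.entries()].sort((a, b) => b[1] - a[1]);
}

// Hypothetical link graph.
const graph: LinkGraph = new Map([
  ["a.example", ["c.example"]],
  ["b.example", ["c.example", "d.example"]],
  ["d.example", ["c.example"]],
]);

console.log(rankByBacklinks(graph)); // -> [["c.example", 3], ["d.example", 1]]
```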

Collaborative Visualization:

Collaboration has been mentioned as one of the biggest challenges for visualization and visual analytics, because the problems analysts face in the real world are becoming larger, more complex, and broader in scope.
Additionally, interaction with digital information is increasingly a social activity, and social media is a good example of this.

We should also note that traditional visualization techniques and tools are typically designed for a single user interacting with a visualization application on a conventional computing device, not for use in groups or on novel technological artifacts. For all these reasons, collaborative visualization appears to be a growing area of research, and one likely to grow even faster in the future.

While collaborative visualization benefits from work in other disciplines, there are many challenges, aspects, and issues unique to the intersection of collaborative work and visualization that this research area must address. There are several definitions of collaborative visualization:

There is an older definition that emphasizes the goal of research in this area:
“Collaborative visualization enhances the traditional visualization by bringing together many experts so that each can contribute toward the common goal of the understanding of the object, phenomenon, or data under investigation.”
More recently, the term social data analysis has been used to describe the social interaction that is a central part of collaborative visualization:
“[Social data analysis is] a version of exploratory data analysis that relies on social interaction as source of inspiration and motivation.”

The authors of the article define collaborative visualization as follows:

Collaborative visualization is the shared use of computer-supported, (interactive,) visual representations of data by more than one person with the common goal of contribution to joint information processing activities.

which is derived from a general definition of visualization:

The use of computer-supported, interactive, visual representations of data to amplify cognition.

 


The article then presents some important application scenarios for collaborative visualization, such as its use on the web to visualize large amounts of data for many users, and its applications in scientific research, command and control, environmental planning, and mission planning.

In the future we will have more data, collected from a wider variety of sources. More people will want to view and analyze this data simultaneously, and they will want to access it through an ever wider range of devices. We need to present this data in a form that is more summarized and more understandable to them.

If you need more information about collaborative visualization, reading this web page could be useful:
http://yon.ir/RIuv
