Sunday, 29 December 2013

Week #6 - Qualitative and case study research (post-reflection)

Last week's theme of the course was Qualitative and case study research, where we - yes, talked about qualitative studies, and case studies. As always, interesting discussions were sparked during the two seminars this week. We reviewed our selected papers and talked about the vast number of qualitative methods one can use, the benefits and drawbacks of them, and some examples of how to use them properly.

Few students (four or five) showed up to the second seminar, unfortunately - but looking on the bright side, we got more time to dissect our selected papers to find out whether they were actually case studies or not. Truth be told, we all had a hard time defining what actually constitutes a case study, and agreed that the wiki definition was a bit off. It seemed to say everything and nothing at the same time, and I unfortunately can't say I fully understand the difference between a case study and a study that merely resembles one.

What can be said is that in a case study, you study a topic, a person, whatever - in a holistic sense, meaning you look at the whole well-defined area instead of just fractions or parts of it, which is common in a lot of research. We also learned that case studies can be very convenient if you lack resources or time (yes, I'm looking at you, master thesis). Confusions aside, I did find last week's seminars interesting - and the course as a whole as well.

Friday, 13 December 2013

Week #6 - Qualitative and case study research (pre-reflection)

Select a media technology research paper that is using qualitative methods. The paper should have been published in a high quality journal, with an “impact factor” of 1.0 or above. The following are examples of questions to discuss in your blog posting:
  1. Which qualitative method or methods are used in the paper? Which are the benefits and limitations of using these methods?
  2. What did you learn about qualitative methods from reading the paper?
  3. Which are the main methodological problems of the study? How could the use of the qualitative method or methods have been improved?
Selected paper: Carey L. Higgins-Dobney and Gerald Sussman (2013). “The growth of TV news, the demise of the journalism profession”. 
The paper describes a paradigm shift in US news media coverage, where more and more news hours are being produced by fewer and fewer staff members, through new technology and a more multitasking news staff, as well as cutbacks in manpower (full-time employment reduced to part-time). The authors argue that this "seriously" weakens investigative news reporting as well as overall news quality. They conduct the research through a study of how the media corporations spend their resources, combined with qualitative interviews with staff members of these corporations: "We spoke with long-time newsworkers about their experiences, which invoked such issues as technology-based layoffs, reductions in status (full-time to part-time), reduced real income and benefits for crew, multitasking without commensurate pay, disregard for professional knowledge and experience, and abrupt dismissals of long-term talent and other employees."
I found the piece very interesting, yet not at all controversial - this was just another stone in the wall of a rapidly changing media landscape. What I did find interesting was the layout of the study - it differed quite a lot from the rigid IMRAD-style intro->background->literature->theory->method->results->conclusion structure I've almost always seen so far. This study, albeit based on IMRAD, had a looser dramaturgy, which made it easier to read but harder to skim through. I don't know if it's just a matter of preference, but it felt less "research-y" when the chapters were named like the titles in a book (e.g. "The Axeman Cometh"), rather than the conventional way. Also, many of the conclusions were drawn from the interviews they conducted, but the only thing apparent to the reader was loosely grabbed quotes - which suggests one should be critical of how the quotes are being used, as well as of whom they are interviewing.
As the authors summarize: "What we found through research, personal experience in the newsroom, and interviews with other members of the TV news industry in Portland (and elsewhere) is that there is a close connection between the economics and the new technologies of news production on the one hand and the reduction of news staff, the declining quality of news, and deteriorating public trust in the TV news function."

===
Select a media technology research paper that is using the case study research method. The paper should have been published in a high quality journal, with an “impact factor” of 1.0 or above. Your tasks are the following:
  1. Briefly explain to a first year university student what a case study is.
  2. Use the "Process of Building Theory from Case Study Research" (Eisenhardt, summarized in Table 1) to analyze the strengths and weaknesses of your selected paper.
A case study is a type of research in which the empirical data comes from studying well-defined cases. It can be quantitative as well as qualitative, or a mixture of the two. G. Thomas proposes the following definition: “Case studies are analyses of persons, events, decisions, periods, projects, policies, institutions, or other systems that are studied holistically by one or more methods. The case that is the subject of the inquiry will be an instance of a class of phenomena that provides an analytical frame — an object — within which the study is conducted and which the case illuminates and explicates.” (Thomas, 2011)

The paper I’ve chosen that uses the case study method is called “Policy failure or moral scandal? Political accountability, journalism and new public management” by Monika Djerf-Pierre, Mats Ekström, and Bengt Johansson, all from the University of Gothenburg, Sweden. It was published in the journal Media, Culture & Society, which has an IF of 1.092.

The paper presents a case study that tries to “examine how journalism does ‘accountability work’ in a political setting marked by new public management”. The case is the Carema scandal, which came into the public spotlight in 2011. The authors analyze 156 news items, published between 1 October 2011 and 31 December 2011. The sources were Dagens Nyheter (86 articles), Aftonbladet (32 articles), Rapport (21 reports) and Nyheterna (17 reports).

The step-by-step “guide” by Eisenhardt is followed thoroughly by Djerf-Pierre et al., though they of course focus more on some steps and less on others. They do a thorough data collection and analysis, and base most of their research on this. They do some literature research as well, but it is not nearly as important. They only use data from four sources, as previously mentioned - and while these media do have different political stances, one could argue that they miss out on a lot of views by not taking into account the myriad of alternative news sources, such as editorial blogs, pundit blogs, and microblogs - all part of a new branch of journalism, rapidly shaping the discussion in the public sphere. The thesis is strengthened by the fact that they use newspapers as well as news shows. One weakness, which the authors mention in their discussion, is that they’re not certain their conclusions can be applied to a more general discussion of how public accountability is a concern of journalism - since they only investigated the Carema case.



References

  • Eisenhardt, K. M. (1989). Building Theories from Case Study Research. Academy of Management Review, 14(4), 532-550.
  • Djerf-Pierre, M., Ekström, M., & Johansson, B. (2013). Policy failure or moral scandal? Political accountability, journalism and new public management. Media, Culture & Society, 35, 960.
  • Higgins-Dobney, C. L. & Sussman, G. (2013). The growth of TV news, the demise of the journalism profession.
  • Thomas, G. (2011). A typology for the case study in social science following a review of definition, discourse and structure. Qualitative Inquiry, 17(6), 511-521.

Wednesday, 11 December 2013

Week #5 - Design research (post-reflection)

This week was about Design Research, and the tutoring consisted of two lectures. The first one was with Ylva Fernaeus and revolved around her research concept called actDresses. The lecture started with Ylva presenting her research, giving us a brief understanding of what they did. After having read the paper, this felt a bit superfluous - but it could be interesting for a student who hasn't read her research. We then got a crash course in the area of semiotics, after which we got into a not-so-heated discussion about what qualitative research can be - and specifically, in Fernaeus' case, whether the design concept itself could be enough empirical data to be accepted as good research. You don't need huge user studies, cross-nation surveys with thousands of respondents, brain scan images, or tests with tons of multi-axis graphs in different color schemes to be accepted by the research community. I found this interesting, and for me - not being the type of future engineer who will do my research frolicking in optimizing signal theories using experimental math - it boded well.

Our second lecture was by Haibo Li, and didn't focus at all on the research paper on the vibrotactile football match by him and his colleagues that we'd read. Instead, Li taught us how to become famous and great researchers - amongst other things. But more on this later. Basically, Li's lecture was a very hands-on take on how to conduct good research, and maybe make some money along the way. For this, he stressed, "we need the businessman" - a person who can see a good idea and answer questions such as "is this breakthrough technology?", "does it address a real pain point?", "is the timing right?", and "can we exploit the opportunity for the long term, or would this market commoditize so quickly that we wouldn't be able to stay profitable?". But before all this, we need math, according to Haibo, as a solid foundation on which we can build our research. Also, prototyping helps if you want to sell your research.

So, how do you become a great researcher? Enter Haibo's theory, which says that researchers who want to become famous spend 90% of their time solving the problem and 10% defining it. If you instead want to be a great researcher, it's the opposite - spend 90% of your time defining your problem, and 10% solving it. 

Friday, 6 December 2013

Week #4 - Quantitative research (post-reflection)

The week consisted of two seminars - one with Stefan Hrastinski and one with Olle Bälter. In the first one, led by Hrastinski, those of us who showed up were asked to outline the "core elements" of the quantitative research paper we'd been asked to choose, by drawing a diagram of the key points which, hopefully, could show some kind of causality in the research. It was an interesting exercise, and it proved to require good knowledge of the research paper.

The second seminar, by Olle Bälter, started off with Bälter teaching us the rules of the game 'Boggle', after which the rest of the seminar was conducted as a Boggle-esque competition between four teams. The groups were to come up with as many unique pros and cons of qualitative and quantitative research as possible - as well as unique aspects of conducting questionnaires on the web versus on paper. Some interesting discussions were sparked and different views were aired, which - at least for me - was intensified by the competitive nature. It was claimed that there is an environmental benefit to sending questionnaires by e-mail instead of regular mail, but I replied that this is not as clear as intuition might first suggest. The environmental impact of data traffic is not something most of our intuitions are capable of dealing with, because of its inherent complexity and its many cooperating 'hidden' elements: my computer, the data center, the electricity company's emissions, etc. Anyway, there was no general consensus about which one was the lesser of two evils, so I think we agreed to take the scientific road and adopt an agnostic view until we've done more research on the subject.

We were also presented with some bad examples of how quantitative research had been conducted in a survey for the staff members of KTH. "Go to great lengths to avoid the word 'not'", and avoid ambiguity in the questions, were two of the hands-on pieces of advice we got. All in all, I think this was a good week.

Thursday, 5 December 2013

Week #5 - Design research (pre-reflection)

For this week's theme, we've read the two texts "Comics, Robots, Fashion and Programming: outlining the concept of actDresses" by Fernaeus, Y., and Jacobsson, M., and "Turn Your Mobile Into the Ball: Rendering Live Football Game Using Vibration" by Réhman, S., Sun, J., Liu, L., & Li, H. They tackled two different aspects of Design research - the first one has more of a multidisciplinary character, focusing on the 'soft' aspects of HCI (mainly semiotics), while the latter text was of a more technical nature. One thing they had in common is that both studies included producing and evaluating a prototype. The concept of actDresses in the paper by Fernaeus and Jacobsson [1] included three prototypes for interacting with three different kinds of robots in a physical manner, while the latter evaluated whether a vibrotactile system could be a viable method for watching football games. [2]
I found both of these texts interesting to read, although more prerequisite knowledge regarding, for example, semiotics wouldn't have hurt with the text by Fernaeus and Jacobsson - it took me a while to grasp their concept and the results. Reading the paragraph "Earlier studies points out that even technomorphic looking robotic appliances can engage users ‘socially’. In the case of Roomba, as with Pleo, specially designed cloth covers are available for purchase on the web. The main usage of such clothes may on the other hand not primarily be for functional purposes, but for personalisation and decoration." made me recall a scene from the TV show "Parks and Recreation" where Aziz Ansari's character Tom Haverford has personalized his Roomba - strapping an iPod to it and naming it DJ Roomba - making it less robot-like and causing people to think it has a personality of its own. "DJ Roomba" is a recurring "character", and has a Facebook page with over 6000 likes [3]. The scene can be found at the bottom of this page.

The role and necessity of prototypes in Media Technology research is hard to dispute. Given that Media Technology almost always relies on technology in some way - unlike fields of study where "physical objects/technology" aren't as apparent (e.g. linguistics, anthropology, and other social studies) - constructing prototypes is often crucial to making your research possible at all. In the case of the vibrotactile football game, I can't begin to see how you could have conducted user studies in any way other than having the subjects try out the prototype for the game. Trying to imagine getting decent results from, say, a questionnaire where the subjects answer questions such as "Where would you say the ball is located if your mobile phone vibrated repeatedly with a frequency of 0.05 ms?" falls short instantly.

This dependency on prototyping does require certain skill sets, and also brings some challenges. For starters, you have the fact that the prototype is never the final product. Also, more often than not, the test subjects are (thankfully) aware that they are in fact subjects of research, which could affect the results. It's like in quantum physics - you change the outcome by merely observing.

References:
[1] - Fernaeus, Y. & Jacobsson, M. (2009). Comics, Robots, Fashion and Programming: outlining the concept of actDresses. In Proceedings of the 3rd International Conference on Tangible and Embedded Interaction. New York: ACM.
[2] - Réhman, S., Sun, J., Liu, L., & Li, H. (2008). Turn Your Mobile Into the Ball: Rendering Live Football Game Using Vibration. IEEE Transactions on Multimedia, 10(6), 1022-1033.
[3] - DJ Roomba Facebook page - https://www.facebook.com/pages/DJ-Roomba/293449892037 (author unknown, not published in a major journal)


======



DJ Roomba "tearin' it up" by playing Snoop Dogg ft The Dream - Gangsta Luv.

Saturday, 30 November 2013

Week #3 - Research and Theory (post-reflection)

During this week, we've delved further into Research and theory, reading about different aspects of what constitutes theory, and what does not. The texts associated with this week's theme were "What Theory is Not" by Robert Sutton and Barry Staw and "The Nature of Theory in Information Systems" by Shirley Gregor, as well as the text "Influence of Social Media Use on Discussion Network Heterogeneity and Civic Engagement: The Moderating Role of Personality Traits" by Yonghwan Kim, Shih-Hsien Hsu & Homero Gil de Zúñiga, from the Journal of Communication (IF: 2.011).

During Wednesday's seminar, we had some fruitful discussions about the nature of theory, and some important clarifications were made. There was some confusion in the wiki in the section with examples of theories, where "field of research" was easily mistaken for "research theory". I also learned that the text I'd chosen for this week, "Influence of Social Media Use on Discussion Network Heterogeneity and Civic Engagement: The Moderating Role of Personality Traits", could be filed under "digital politics theory", which states that using the internet is a positive predictor for all forms of political participation for young people. The authors made some predictions about traditional and digital participation. Media can shape our views and our perception of politics, but internet use does not affect all groups in society similarly; rather, it depends on a complex combination of personal and social characteristics, and the specific content and context of the medium. This was indeed the case with my text, where the authors saw a link between social media usage and civic interaction - claiming that high social media usage by introverts could be related to heterogeneity within one's network.

Furthermore, a lot of the discussion focused on the validity of theories, and when/if a theory could ascend into a higher state - a fact. There was a general consensus that the phrase "When a theory is tested and accepted by a majority of experts in that field, it can be regarded as true." was erroneous and should be replaced, because - if we are to don Cartesian doubt - (almost) nothing can be regarded as true. Therefore, a more humble version was presented: "When a theory is tested and accepted by a majority of experts in that field, it can be regarded as tested and accepted by a majority of experts in that field."

Week #4 - Quantitative research (pre-reflection)

Select a media technology research paper that you argue is using quantitative methods in a good way. The paper should be of high quality, with an “impact factor” of 1.0 or above. The following are examples of questions to discuss in your blog posting:
  1. Which quantitative method or methods are used in the paper? Which are the benefits and limitations of using these methods?
  2. What did you learn about quantitative methods from reading the paper?
  3. Which are the main methodological problems of the study? How could the use of the quantitative method or methods have been improved?
The study I’ve selected is from the Journal of Computer-Mediated Communication, and was conducted by Nicole B. Ellison, Charles Steinfield, and Cliff Lampe, from Michigan State University. It’s called “The Benefits of Facebook ‘‘Friends:’’ Social Capital and College Students’ Use of Online Social Network Sites”, and examines the use of Facebook and the formation and maintenance of social capital. The study uses students from Michigan State University (MSU) as its target group, and relies solely on answers from a questionnaire. The authors put forth four hypotheses based on previous studies, and try to support or falsify these using a quantitative questionnaire sent out to 800 “random” MSU students, of whom 35.8% (N = 286) answered. The questionnaire is anonymous, but data about the respondents was collected in the following categories: gender, age, ethnicity, income, year in school, home residence, local residence, member of fraternity/sorority, hours of internet usage per day, and “Facebook member”.
For the questions in the questionnaire which try to extract more qualitative answers, a Likert scale is used (with a Likert scale, you answer a statement using a scale ranging from 1 to 5, where 1 corresponds to “strongly disagree” and 5 to “strongly agree”). It is the most widely used scale for questionnaires, and has been around for more than 80 years. They use this scale along with other established scales, for example when they measured the respondents’ satisfaction with life at MSU:
“Satisfaction with Life at MSU The scale of satisfaction with life at MSU was adapted from the Satisfaction with Life Scale (SWLS) (Diener, Suh, & Oishi, 1997; Pavot & Diener, 1993), a five-item instrument designed to measure global cognitive judgments of one’s life. [...] The reliability test for this 5-point Likert scale showed a relatively high reliability”
The reliability test they’re referring to is Cronbach’s Alpha - a way of estimating the internal consistency, and thereby the reliability, of a set of scale items. They calculate alpha from the answers they received in the questionnaire, resulting in alphas ranging from 0.70 to 0.87. This is in the “good” range, according to Wikipedia (where α ≥ 0.9 is seen as ‘excellent’ and 0.6 ≤ α < 0.7 is ‘acceptable’). Furthermore, I found a piece on Cronbach’s Alpha by researchers Mohsen Tavakol and Reg Dennick:
“High quality tests are important to evaluate the reliability of data supplied in an examination or a research study. Alpha is a commonly employed index of test reliability. Alpha is affected by the test length and dimensionality. Alpha as an index of reliability should follow the assumptions of the essentially tau-equivalent approach. A low alpha appears if these assumptions are not meet. Alpha does not simply measure test homogeneity or unidimensionality as test reliability is a function of test length. A longer test increases the reliability of a test regardless of whether the test is homogenous or not. A high value of alpha (> 0.90) may suggest redundancies and show that the test length should be shortened.” (Tavakol & Dennick, 2011)
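To make this less abstract for myself, I sketched how Cronbach’s alpha is actually computed: α = (k / (k − 1)) × (1 − Σ item variances / variance of the summed scores), where k is the number of items in the scale. This is just a minimal illustration in plain Python with made-up Likert responses, not data from the Ellison et al. study:

```python
from statistics import pvariance

def cronbach_alpha(rows):
    """Cronbach's alpha for a list of respondent rows (one score per item)."""
    k = len(rows[0])                  # number of items in the scale
    items = list(zip(*rows))          # transpose: one tuple of scores per item
    item_var = sum(pvariance(col) for col in items)
    total_var = pvariance([sum(row) for row in rows])
    return (k / (k - 1)) * (1 - item_var / total_var)

# Hypothetical 5-point Likert answers: 4 respondents x 3 items.
# Each respondent answers all three items similarly, so consistency is high.
answers = [
    [4, 5, 4],
    [3, 4, 3],
    [5, 5, 5],
    [2, 3, 2],
]
print(round(cronbach_alpha(answers), 2))  # → 0.98
```

With these deliberately consistent answers the alpha lands near 1; per the Tavakol and Dennick quote above, a value that high in a real scale could actually hint at redundant items.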
So, does the study use quantitative methods in a good way? I don’t know. I mean - I’m honestly not well-read enough in research methodology to make that assessment. But if I were to try anyway - using common sense, gut feeling (which I, for the record, wouldn't use in research) and the limited theoretical background from the bachelor’s thesis I did two years ago - I would say that this study uses quantitative methods in a good way. This, mainly judging from i) the high number of respondents (N = 286, a 35.8% response rate), and ii) the consistent use of well-established and proven scales (Likert scale, Cronbach’s Alpha) and reliability tests for the items in their questionnaire. “Our three measures of social capital—bridging, bonding, and maintained social capital—were created by adapting existing scales, with wording changed to reflect the context of the study, and creating new items designed to capture Internet-specific social capital (Quan-Haase and Wellman, 2004). The full set of social capital items was factor analyzed to ensure that the items reflected three distinct dimensions (see Table 5).” But as previously said - I’m just a layman, trying to evaluate this in too short a time span.
After reading “Physical activity, stress, and self-reported upper respiratory tract infection” by Bälter et al., I learned that men who stress a lot benefit more than others from physical activity, in terms of reducing self-reported URTI. I also learned that there’s no good Swedish translation for URTI (please correct me if I’m wrong), but from what I could pick up, it refers to the common cold, influenza, and similar infections.
So, for starters - some kind of Qualitative vs. Quantitative 101: three advantages of quantitative questionnaires are that you can get a lot of answers in a short period of time, that bias is reduced since everyone gets the exact same questions, and that there’s a possibility of more honest replies if the respondents are allowed to be anonymous. The first two advantages are very tangible in, for example, medical research, where it could literally be a matter of life and death to reduce bias as much as possible, as well as to get an incredibly solid statistical base. The con of quantitative research is that you lose depth that could be valuable or even necessary for drawing valid conclusions from your research. The possibility of follow-up questions is reduced, and is eliminated completely for anonymous surveys. Also, your questionnaire is only as good as your questions: if they are poorly formulated or the answer scale is poorly constructed, your research will (likely) be too.
References:
Ellison, N., Steinfield, C., & Lampe, C. (2007). The Benefits of Facebook ‘‘Friends:’’ Social Capital and College Students’ Use of Online Social Network Sites. Journal of Computer-Mediated Communication, 12, 1143-1168.
Tavakol, M. & Dennick, R. (2011). Making sense of Cronbach’s Alpha. International Journal of Medical Education, 2, 53-55.
Fondell, E., Lagerros, Y. T., Sundberg, C. J., Lekander, M., Bälter, O., Rothman, K., & Bälter, K. (2010). Physical activity, stress, and self-reported upper respiratory tract infection. Med Sci Sports Exerc, 43(2), 272-279.