Teori & Metod för Medieteknik - Kursblogg
Tuesday, December 17, 2013
Theme 6: post-reflection
I realized I forgot to write anything about the last question for the pre-reflection, so here goes:
I read a paper titled "Creative thinking in digital game design and development: a case study" that concerned itself with a project in an American middle school where students got to take a class in game development.
There were a couple of things I reacted to. First off, the choice of which school to study didn't come across as especially theoretically motivated. Maybe it was, but it wasn't thoroughly explained.
The data collected was purely qualitative: interviews with students, classroom observations and analysis of the games produced. Only twelve interviews were conducted, and those students were chosen by the teachers according to loose criteria they had been given by the researcher.
There wasn't any substantial shift in perspective, or arguing from other points of view, in the paper. It all rested on top of several previous studies and seemed, to me, to be more an exercise in weakly affirming a hypothesis than an attempt to test it with the possibility of falsifying it.
This is another case of a study where I believe in the hypothesis, but can't see how the design of the study and the methods chosen could actually make the hypothesis more likely to be true than what was already confirmed in previous studies. At first glance I thought this would be a good paper to discuss in relation to theory building, but after reading it thoroughly I don't feel that is the case.
For the case study to be a better basis for building theory upon, I think it would have needed a design that enabled the collection of some quantitative data, and more qualitative data, on different aspects of the human-computer interaction. Also, the researcher seemed to have gone into the study with multiple hypotheses, all summed up pretty well in the sentence "game development in the curriculum is great because it is open-ended, fosters creativity and technological proficiency". There was never any room for more specific hypotheses about correlations, or explanations of how different factors affected the outcome. The sample size also seemed far too small to predict anything beyond loosely affirming the hypotheses formed at the beginning of the study.
As far as reflection goes, I enjoyed the seminar we had on Monday. The discussions were interesting, even though they didn't stray very far from the papers we had each read. All in all, I feel I have become more comfortable reading papers in an objective way and, having gained experience in reading papers, also more comfortable analysing research methodology and study design as separate from content.
Friday, December 13, 2013
Theme 6: pre-reflection
This week I chose another paper from the Journal of Communication (I'm really starting to enjoy this journal), titled "Making Sense of the News in a Hybrid Regime: How Young Russians Decode State TV and an Oppositional Blog" by Florian Toepfl.
Toepfl investigates how young, well-educated, urban Russians decode different news items, using semi-structured in-depth interviews (sample size of 20). He considers the research to be exploratory, filling a gap that other, similar research endeavours have, according to him, left open.
When selecting participants for the study, he first selects according to criteria (educated and urban) and then, in a second stage, for maximum variance in his sample (political worldview, age, gender, course of study).
Using interviews as data for research can be risky with such a small sample size, because there might not be enough data to reach any sort of statistical significance. In this respect, Toepfl was lucky, or possibly insightful, when designing the study. Since he aspires to do exploratory research, his results – showing some clear trends despite the small sample size – do a good job of suggesting areas that might be interesting for further research. (He suggests both quantitative and qualitative approaches here.)
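To make the sample-size point concrete, here is a minimal Python sketch (the counts are invented for illustration, not taken from Toepfl's paper) of how wide a confidence interval gets with only 20 respondents:

import math

n = 20          # Toepfl's sample size
successes = 14  # hypothetical: respondents showing some decoding pattern
p = successes / n

# Normal-approximation (Wald) 95% confidence interval for a proportion
margin = 1.96 * math.sqrt(p * (1 - p) / n)
print(f"{p:.0%} +/- {margin:.0%}")  # roughly 70% +/- 20 percentage points

An observed trend of 70% could plausibly sit anywhere between roughly 50% and 90%, which is why a sample like this supports exploration rather than statistical claims.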
I thought it was interesting to read about the study design, and how Toepfl – at least in my view – managed to get a sample of people who shared all of the criteria he had set, but still were all over the spectrum when it came to the answers he gave in the interviews. This is of course a balancing act: carefully deciding how to sample respondents without compromising intellectual honesty and objectivity. I think he did very well in this regard.
As pointed out in the paper, one major problem is that, since Russia's media landscape is so fragmented, the answers he got from respondents about their political worldviews were not at all good indicators of the beliefs they actually held on matter-of-fact issues. Toepfl explains this as showing how a fragmented media environment codifies ideologies and changes the significance of concepts like "democratic", "liberal" and "conservative", depending on an individual's media habits.
Case study:
What is meant by <case study> is research into some phenomenon in its context. Studying something as part of a context is a good way of explaining and understanding how things happen in real life, because things are always part of a context in one way or another.
Wednesday, December 11, 2013
Theme 5: post-reflection
Due to problems with the schedule, I could unfortunately only attend Haibo's lecture this week, and not Ylva's.
I must admit that I was disappointed with Haibo's lecture. It felt more like a lecture on engineering entrepreneurship than a lecture on design research. He spoke about how to define problems and how important it is to be able to go back to a problem's definition and look at it with fresh eyes; still, it felt like the definition of the concept <problem> he was using during the lecture was a very fuzzy one. This comes back to what I was talking about in the pre-reflection: I'm lacking a wider perspective in the material and the lecture, not one solely interested in the development of consumer products. I think that's limiting the discussion and promoting narrow ideas about problems and design, because the artefacts being designed serve a need that itself has to be created, or is highly artificial. I'm not saying that's inherently bad, I'm just saying it's only one side of the coin.
I'd like to discuss this more, but I feel I need to think it over more before I do, so I'll answer some questions instead.
How can media technologies be evaluated? / What role will prototypes play in research?
Media technologies can of course be evaluated in a lot of different ways, but when faced with evaluating a technology that is not yet fully implemented, constructing a prototype to carry out tests with is a good option. There are a lot of factors that are very hard to conceive of on a theoretical level, which is a good argument for putting things into practice as quickly as possible, to spot issues as early in the design process as possible.
Design research is a balancing act between quantifiable and non-quantifiable goals, which is why feedback on a prototype that represents as many of the features of the finished artefact as possible is invaluable.
Why could it be necessary to develop a proof of concept prototype?
It could be necessary if you have to prove that your concept works, either to yourself, your collaborators, friends or an employer. It is a good way to communicate the essence of your idea.
What are characteristics and limitations of prototypes?
That's almost impossible to say. A prototype can be everything from a sheet of paper to an almost finished physical artefact or piece of software. I guess the most unifying characteristic of prototypes is that they are developed for the purpose of testing, and for use in a laboratory environment, whatever that might be.
Wednesday, December 4, 2013
Theme 5: pre-reflection
We read two papers in preparation for next week's lectures. They were both interesting and, I think, showed in a good way how design research differs a lot from more analytical, explanative, predictive, etc., research. That said, I don't feel that there's a lot to reflect on before the lectures, since I don't have any idea of the scope of this theme. I'm going to try to write what has come to mind without just summarizing the articles' content (which would be text, but not so much of a reflection).
Both papers had interesting discussions regarding human-computer interaction, especially when it came to the nature of human users in the context of the application. Li's paper dealt with how human tactility restricts representation with vibrations, and it was apparent that it wasn't as simple as taking into account how our sense of touch works and then coming up with the best solution. The nature of our sense of touch offered a set of premises for the design problem, but the ways to come to terms with those could be numerous. Correspondingly, Farnaeus's paper had an interesting discussion of how to utilize people's semiotic knowledge from the areas of comics and fashion when designing new ways to interact with, or program, robots. Here, I thought, the most interesting part was how familiarity played a big role. Since the mobile football application as well as the robot applications came off as consumer products, I can understand how it's relevant to design with the thought in mind that users should want to use the product without having to get an introduction to it, or practise it beforehand. Still, the football-mobile paper did show that some training was required before you could actually use the prototype effectively. I think it would be interesting to also include a paper on designing something that people need to use, in contrast to something that people should want to use. The weighing of the different goals would shift if the application was one that aimed at facilitating a necessary task and doing it effectively, while still being easy enough to learn for the intended user base.
The approach of using familiarity is also interesting in another way, because the robot paper uses familiarity with other concepts, comics and fashion, to design physical programming applications for robots. This makes me think about how novel ideas for applications can have a hard time getting accepted, not because they're not great ideas, but because they're unfamiliar and hard to relate to, as well as to learn. Borrowing design characteristics from other, unrelated areas that people are expected to have knowledge about is a way to bring novelty to a design sphere. But, having said that, I think that happens everywhere in technology these days, and – at least when done on a superficial level – it can make me feel a bit sad and uninspired: just cross-pollinating ideas to make people engage in old behaviours dressed up in other old behaviours' clothes to make it feel novel. On closer thought, I think that mainly goes for entertainment applications. I don't know, I'll have to reflect more on this I guess…
Theme 4: post-reflection
I didn't feel like I got as much out of this week's studies as I did from the previous weeks'. We got to read examples of papers written on quantitative studies, but I missed a thorough theoretical explanation of what makes a method essentially quantitative versus qualitative. Since it's presented as something of a dichotomy, it shouldn't be impossible to draw up some dividing lines that you can refer to when thinking about instances of research methods being used.
The theme for this week did lead to some interesting discussions, though; I greatly appreciated the seminar that Olle Bälter held. It also got me into interesting discussions outside of school, with a friend who studies statistics, about quantitative/qualitative methods versus quantitative/qualitative data, and how they combine and possibly don't combine. It's easy to get comfortable and blind to the bigger picture when handling data.
After having read the paper by Bälter et al., the seminar with Olle turned out to be the most interesting part of this theme by far. I picked up many new ideas about surveys, how people behave and how to handle information. It was a refreshing experience, in an area that otherwise feels a bit dogmatic, to have someone point out a lot of good methods that aren't actually being used on a wide scale. Overall, I think I'm left with the insight that I personally find it much more interesting to think about how to design a good study in a specific scenario than to actually carry it out.
Friday, November 29, 2013
Theme 4: pre-reflection
I was looking for an article in the Journal of Communication, since my last read from that journal proved so interesting. I found a promising candidate in "Cynics All Around? The Impact of Election News on Political Cynicism in Comparative Perspective." ("N=48,872? That seems like a serious quantity. Let's have a read.")
The paper investigates the impact of strategy framing in campaign news on levels of political cynicism. Strategy framing is described as news stories focusing more on the game characteristics of an ongoing political campaign than on the substantive issues: for instance, using metaphors from sports, games or war to explain the actions of parties or candidates in terms of them trying to advance their position or increase their chances of gaining influence, rather than reporting on the actual political issues being discussed.
Using a very large sample size, the study spanned 21 different EU countries at the time of the EP election in 2009. Quoting the paper, "a multimethod research design including a content analysis and a two-wave panel survey was employed, first, to investigate how the news media in the different EU member states have covered the campaign, and second, to assess the impact of such coverage on the decision of voters to turn out to vote."
The first part of that multimethod design is the content analysis, in which they used a large sample of news stories from all 21 countries involved (N=48,872) and coded them based on strategic framing. The second part was a two-wave survey, conducted three weeks before and immediately after election day in each country.
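As a side note, here is a toy Python sketch of how I picture that coding-then-counting step (the stories, country labels and numbers are all invented for illustration, not taken from the paper):

from collections import defaultdict

# (country, strategically_framed) pairs for a handful of made-up stories
coded_stories = [
    ("SE", True), ("SE", False), ("DE", True),
    ("DE", True), ("DE", False),
]

counts = defaultdict(lambda: [0, 0])  # country -> [framed, total]
for country, framed in coded_stories:
    counts[country][0] += int(framed)
    counts[country][1] += 1

for country, (framed, total) in counts.items():
    print(country, framed / total)  # share of strategically framed stories

The qualitative judgment lives entirely in the coding step; everything after that is just counting.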
The authors' discussion of their data and methods is exhaustive and not very well suited for summary here, but after having read through it, and having been prompted to revisit the locked compartment of my mind dedicated to mathematical statistics, I have substantial confidence in the dependability of the study's results.
One thing worth commenting on is how the data collected in the surveys, as well as the data coded in the content analysis, is very much qualitative data that is then used in a quantitative way; that makes it important to keep in mind what can and cannot be said about the results. When working with qualitative data in that way, you can use quantitative methods for analyzing trends, but you have to keep in mind that the data points cannot be considered equidistant. Take, for instance, the 7-grade scales on which the survey participants are asked to rate their agreement with certain propositions: the data is qualitative, and as such, the scale is in some way arbitrary.
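A small Python sketch of what that non-equidistance means in practice (the ratings below are made up, not from the survey): the mean implicitly treats the steps of the scale as equally spaced, while the median relies only on their order.

import statistics

responses = [1, 2, 2, 6, 7, 7, 7]  # hypothetical 7-grade agreement ratings

print(statistics.mean(responses))    # ~4.57, assumes equal spacing between steps
print(statistics.median(responses))  # 6, uses only the ordering of the steps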
The other paper, by Bälter et al., also shows a very high degree of methodological awareness. It's interesting to read a paper like this, because a high degree of intellectual honesty shows through in the discussion about what the data could be said to imply.
What I've been thinking mostly about after this week's reading is how to think about qualitative and quantitative both in terms of methods and in terms of data, and how the two can easily be confused; also how the two types of methods do a good job of complementing each other, provided you have a good understanding of how they work.
Quantitative methods enable the use of a much larger sample size, because the interesting variables are quantified and easily handled compared to the variables looked at in a qualitative study. In a qualitative method, the focus lies on interpretation, which means you may not even know all the variables you are interested in going into the study.