Wednesday, October 31, 2012

Update on Time-Shifting's Impact

A while ago I noted that a lot of this fall season's premieres benefited heavily from time-shifted viewing.  More recent ratings numbers suggest that the impact is not limited to sampling new series: many of the most popular TV series are continuing to see 40-50% of their audiences time-shift, based on the Nielsen L7 (live viewing plus time-shifted viewing within a week) ratings for the key 18-49 demographic.
Thus far in 2012, the top 10 broadcast shows have been getting 45.6% of their total audience (counting each episode's live broadcast and the seven days that followed) from time-shifted viewing, up from 41.6% in 2011.
Shows pulling more than half their viewing from time-shifting include Big Bang Theory, Modern Family, Revolution, and New Girl (tops with 59% time-shifted). The rest of the top 10 series had time-shifting of 40% or higher, with the exception of Elementary (which narrowly missed at 39%).  Analysts note that the median age of viewers drops when time-shifted viewing is included, suggesting that younger folks make heavier use of their DVRs.  Penetration of DVR devices/services in US TVHHs continues to expand, passing 45% this summer, but cannot account for all of the increase in time-shifting on its own.  TV viewing habits seem to be changing, particularly among younger viewers.
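To make the arithmetic behind those figures concrete, here's a minimal sketch of how a time-shifted share is derived from a show's live and L7 ratings (the numbers below are invented for illustration, not Nielsen's actual figures):

```python
# Illustrative only: the ratings here are invented, not Nielsen's actual figures.

def timeshift_share(live_rating: float, l7_rating: float) -> float:
    """Percent of the L7 (live plus 7-day) audience that came from time-shifting."""
    return (l7_rating - live_rating) / l7_rating * 100

# A hypothetical show whose 18-49 rating grows from 2.0 live to 4.9 in L7
print(f"{timeshift_share(2.0, 4.9):.0f}% time-shifted")  # -> 59% time-shifted
```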

Source -  More Broadcast Hits Seen Through Time-Shifting, Media Daily News

Disney goes gaming

It's not quite the blockbuster of the Lucasfilm deal, but Chris Kohler has a good piece in Wired about Disney's renewed interest in gaming and its impact on the media giant's other operations.  He notes that videogame characters are starting to join the iconic pantheon of classic Disney characters.
Of course, Disney's involvement in videogaming isn't exactly new.  Disney began licensing some of its characters for use in videogames some thirty years ago, and established its own in-house gaming unit in 1988.  However, as Kohler notes, those efforts were aimed at bringing existing characters into the gaming market rather than creating new characters.  Even 2010's hit Epic Mickey brings back Oswald the Lucky Rabbit, Mickey's precursor.  Disney lost the rights to Oswald quite early, a loss that led to his creation of Mickey Mouse and contributed to Disney's famous (or infamous) emphasis - bordering on obsession - on keeping and maintaining IP rights.
  Enter Swampy, an alligator and leading character in Disney's successful smartphone game app Where's My Water.


In addition to the success of the game app, Swampy has entered Disney's merchandising efforts.
“This is maybe the first time in Disney’s history where we have a character that was created solely for a videogame product that is now branching in other directions,” says Disney Interactive Media Group vice president Bill Roper.
Roper hopes that the unit's Originals group can produce new worlds and characters that can transition into other markets - shows for the Disney Channel, animated features, and more:
“There’s been discussions about, can we have Swampy be a walkaround character somewhere? For example, in Orlando in the water parks?”
It doesn't hurt that Disney's next big animation release, Wreck-It Ralph, is a nostalgic paean to classic arcade games (trailer).  I'd look for a lot more gaming characters and worlds to cross over to other markets.

Source - How Videogames are Changing Disney, Wired.com

Disney's Latest Big Deal

Of course, you've probably already heard about Disney's announcement yesterday of its deal to purchase Lucasfilm Ltd. for $4.05 billion.  About half of the purchase price will be paid in cash, half in Disney stock - and the stock component will make George Lucas, sole owner of Lucasfilm, the second largest shareholder in Disney.
“For the past 35 years, one of my greatest pleasures has been to see Star Wars passed from one generation to the next,” Lucas, the sole shareholder of Lucasfilm, said in a press release announcing the acquisition. “It’s now time for me to pass Star Wars on to a new generation of filmmakers.”
“Lucasfilm reflects the extraordinary passion, vision and storytelling of its founder, George Lucas,” Disney chairman and CEO Bob Iger said in a statement. “This transaction combines a world-class portfolio of content including Star Wars, one of the greatest family entertainment franchises of all time, with Disney’s unique and unparalleled creativity across multiple platforms, businesses and markets to generate sustained growth and drive significant long-term value.”
   And speaking of the Star Wars franchise, Disney announced that Episode 7 of Star Wars is forthcoming (scheduled for release in 2015), with other Star Wars franchise films to follow every 2-3 years.
The announcement was only the most recent of a number of business moves that have helped cement Disney's pre-eminence in a range of creative content-based industries - the acquisition of Capital Cities/ABC (which included ESPN) through merger in 1996, Pixar in 2006, comic book publisher Marvel in 2009, and the producers of a number of children's TV programs (the Muppets, DIC, Saban (Power Rangers)).  The Lucasfilm deal in particular secures Disney's position in the superhero and sci-fi/fantasy market.

While George Lucas's decision to sell Lucasfilm and withdraw from the business came as a surprise, the sale to Disney shouldn't have been.  Both Lucas and Disney are considered to be among the smartest media business operators, consistently taking a long-term approach to growing their businesses rather than focusing on short-term profit maximization.  Both also shared an awareness that good creative content is exploitable well beyond the initial media product release.
Disney in particular has a long tradition of exploiting creative intellectual properties: Disneyland (a theme park incorporating Disney characters); being the first movie studio to use television to promote its movies and theme park (starting with the Disneyland series in 1954); long-term recognition and exploitation of licensing and merchandising; and its pioneering distribution strategy for animated movies.  Disney recognized quite early that the primary audience for many of its animated features was children, and that that market had a distinctive characteristic - children grow up, while others are born to replace them.  Walt Disney saw that he could exploit that feature by re-releasing animated features every 7-10 years to a new audience, and until the rise of recordable media that was Disney's film distribution strategy.
George Lucas was also known for his innovative business practices as well as his creative acumen.  Lucas's first big success came with American Graffiti, both as a hit film and as an innovative financial arrangement. Rather than take a big salary and/or a percentage of net profits - the industry norm - Lucas struck a deal that provided him less upfront money in exchange for a percentage of the film's gross revenues.  In essence, Lucas bet on his own success, and American Graffiti, produced at a cost of $775,000, went on to earn more than $200 million in box office and home video sales alone.  Those earnings allowed Lucas to build up Lucasfilm (founded in 1971), create special effects powerhouse Industrial Light & Magic, and largely self-finance Star Wars.  Lucas then struck another innovative distribution deal with Fox, again taking a smaller salary in return for keeping licensing and merchandising rights to the Star Wars creative franchise.  Star Wars became the highest grossing film in the industry's history (until surpassed by E.T. five years later), with global earnings of more than $775 million to date.  Those earnings allowed Lucas to self-finance the rest of the Star Wars and Indiana Jones films, while insisting on retaining licensing and merchandising rights. Lucas has noted that Disney's long history of protecting, nurturing, and - yes - exploiting its creative content and intellectual property meant that he could trust them to take good care of his signature creative franchises.
"I really wanted to put the company somewhere in a larger entity which could protect it," (Lucas)  said.

In an interview he gave to fan magazine Empire earlier this year, Lucas had indicated he wanted to move away from the corporate side.

"I'm moving away from the company, I'm moving away from all my businesses, I'm finishing all my obligations and I'm going to retire to my garage with my saw and hammer and build hobby movies.
"I've always wanted to make movies that were more experimental in nature, and not have to worry about them showing in movie theaters."
There are a couple of lessons here for creative content producers and media outlets in the digital age.  Both Disney and Lucas recognized that the value of creative content is not limited to its initial production and release, but can be translated into value in other markets; both were adept at innovative exploitation of that value; and both recognized the importance of keeping and protecting intellectual property rights.  All are key lessons for content producers in a digital media environment.

Sources - Disney to buy "Star Wars" producer for $4.05 billion, Reuters
Disney to Buy Lucasfilm for $4.05 Billion; New 'Star Wars' Movie Set for 2015, The Hollywood Reporter
Disney Buys Lucasfilm for $4B, Targets Star Wars: Episode 7 for 2015, Wired.com

Monday, October 29, 2012

Funny, Yet Terribly Sad

After the 2008 US Presidential election, a number of senior editors and executives at major U.S. news organizations admitted that their coverage of Barack Obama was "soft": that there was little of the critical investigative reporting they had directed at George Bush from 2000 on.  Some admitted (and a few bragged) that their campaign coverage was worth a 5-10% swing in votes for Obama.

Have things changed this campaign?  Probably - but not necessarily in a positive way.  How many continuously weak economic reports can you honestly label as "unexpected"?  (For the AP, the count's 3+ years and ongoing - but perhaps that's economic ignorance and not partisanship at work).  But this was too funny and too sad not to share.
Both Pew and Gallup show that Americans are losing faith in mainstream news' ability to report accurately, honestly, and fairly. The trend is consistent and long-term, and it threatens the viability of "news" in the traditional journalism sense.  But when most major traditional news organizations don't deign even to talk about a top-5 storyline among the American public, it does more than continue that trend; it fundamentally challenges the notion and theory of Agenda-Setting.

Source:  Cartoon: Media Monkeys on Benghazi, Independent Journal Review
h/t - Instapundit

A Big Week for Mobile - Hardware

The last week saw a number of events and stories on the mobile media front.

The last few weeks have seen a number of highly anticipated new tablet hardware releases, leading Laptop magazine to launch the "Tablet World Series."


Apple announced the release of its highly-anticipated iPad mini, which seems like a smaller iPad, with all the good (and not-so-good) that implies.  Tech reviewers found it technologically impressive, but still priced at the top end of the market.  Apple also surprised the industry press a bit by announcing a range of other product upgrades at the same event.
While the iPad's main top-end competitor, the Samsung Galaxy Note, has been out a while, Samsung did recently upgrade to the Samsung Galaxy Note 10.1 tablet.  Samsung seems to have focused on positioning the tablet as a business productivity tool.
Microsoft followed with the launch of its new "Surface" tablet and mobile-savvy Windows 8.  The Surface tablet has some innovative technology, and represents a major shift in corporate strategy (developing its own computing hardware).  If the Surface does well in the tablet market, it could lead the way for further moves into hardware by what's essentially a software giant.
A bit earlier, Google entered the hardware fray with the Nexus 7 tablet.  The Nexus 7 has been piling up accolades and Editor's Choice awards based on its performance and low price for a small (7") tablet running the most current Android OS.
Content-focused tablets also arrived from Barnes & Noble (the Nook HD+) and Amazon (Kindle Fire HD) - both improvements over earlier e-readers and tablets, but still missing some features of what could be called "full-service" tablets.  Amazon's also innovating in the wireless broadband service arena, offering free Skype calling on all models, and a 4G data plan for $50/yr (250 MB/month) on its top model.

   The tablet market's exploding with quality entrants offering a range of tablet options at a range of prices - and giving Apple's iPad some serious competition.  That should give potential buyers a range of viable options, and help hasten tablet adoption.  And continue the debate as to whether tablets or PCs will dominate the future of personal computing.

Sources -  With iPad Mini, Apple poised to shake up tablet market. Again, c/Net
Tablet mania: The battle of the seven-inch tablets, Washington Post
The New Era of Personal Computing, PC Mag

HDTV Pre-eminent in Primetime

Nielsen's latest numbers project that three-quarters of US TV households now have HDTV sets - a gain of 14% in the last year.  More than 40% of US TVHHs have and use multiple HDTV sets.  But it's not just ownership - 61% of all prime-time viewing is done on HDTVs.
Nielsen also dug into its May numbers for a deeper look at "true HDTV" viewing, which it defines as using an HDTV set connected to an HD set-top box that receives HDTV channels and programming (what - no over-the-air reception of local stations broadcasting in HD?).  Looking at 17 "HD" channels (5 English-language broadcast networks and 12 cable channels), Nielsen found that 29% of broadcast and 21% of cable channel prime-time viewing was "true HDTV" watching.

This might seem low, but the Nielsen definition and sample are very narrow - so the numbers should be treated as minimum levels rather than the actual amount of HDTV viewing.

Source -  Most Prime-Time Viewing on HDTV, Quality An Issue, MediaDailyNews

Wednesday, October 24, 2012

Clark Kent (Superman) to quit Daily Planet

Concern over the current state of journalism and traditional news organizations has hit the comic books.
The latest issue of Superman (#13, from DC Comics) has mild-mannered reporter Clark Kent (a.k.a. Superman) getting into a heated argument with Perry White and with Lois Lane, who's now a TV news producer, over the lack of serious news.
Kent is summoned to the office of Perry White, the Planet's publisher, and given a dressing down for his lack of stories.
When Kent complains that it has been a "slow news week" he is told, by Lois Lane, now a television producer, that it is a poor excuse, prompting him to begin a rant about the state of modern journalism which ends with him quitting the paper.
"Why am I the one sounding like a grizzled ink-stained wretch who believes news should be about – I don't know – news?" he asks.
His question prompts White to respond with a devastating critique of newspaper journalism: "Times are changing and print is a dying medium.
"I don't like it but the only hope we have of delivering any news at all is to give the people what they want to read and God help me if a front-page story about some reality star gets them to pick up a paper and maybe stumble on some real news " the publisher continues.
After a second heated discussion with Morgan Edge, the paper's owner, Kent quits.
The comic's current writer says Kent's more likely to start a blog than look for a job at another of Metropolis's media outlets.
A spokesman for DC comics confirmed the departure: "This is not the first time in DC Comics history that Clark Kent has left the Planet, and this time the resignation reflects present-day issues – the balance of journalism vs. entertainment, the role of new media, the rise of the citizen journalist, etc."

Source -  Superman quits the Daily Planet - over the state of journalism,  the Telegraph

Nielsen Seeks Pre-emptive Measure

Nielsen and WPP's Group M are seeking to pre-empt the search for a viable way to measure online audiences - pushing for the industry to fall in behind Nielsen's "Online Campaign Ratings" and "Cross-Platform Campaign Ratings" as the "currency for online advertising buys."
The industry has long felt a need for better and more consistent measures, with a cross-industry group exploring and evaluating various metrics being developed by research and tracking firms.  Certainly, advertisers would prefer a widely used common metric, as it simplifies buying and placement decisions.  They'd like to see something as widely used as Nielsen's TV ratings have been for that industry - although they'd also prefer something more reliable and valid.  Ideally, they'd like a metric based on something more than simple exposure - a measure of interest, interaction, engagement, impact, etc.  And they would prefer it sooner rather than later.
Nielsen's got a measure (although it remains focused on exposure), and has been successful at getting some large ad agencies and advertisers to use it.  But it doesn't meet all the criteria the industry's been discussing as useful, if not critical, in a new online audience metric.  ComScore, a rival metrics firm that bases its measures on audience behaviors as reflected in data flows, has been challenging Nielsen's metrics as not capturing the "viewability" of online ads (in the sense of engagement and/or impact).  A spokesman for GroupM was dismissive of that argument, contending that "viewability" will matter less because the ad servers can track it.

Here's the essence of what they're arguing about (I'm simplifying for illustrative purposes) - Nielsen has a way to count how many people visit a page with an ad - say an online video ad.  ComScore has a way of knowing not only how many people were served the ad, but also how many watched it.  The GroupM dismissal amounts to saying that the ad servers will know how many times they streamed the video ad - but not necessarily how much of it was watched.  (It also ignores the facts that ad servers aren't all using the same metrics, and that their data are proprietary and not likely to be available for a shared industry metric.)
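In code terms, the difference between the two approaches might look something like this - a simplified sketch with assumed event names and an arbitrary 50% playback threshold; it is not either firm's actual methodology:

```python
# Hypothetical ad-event log; the event names and threshold are assumptions.
events = [
    {"user": "a", "event": "ad_served"},
    {"user": "a", "event": "ad_progress", "pct_played": 100},
    {"user": "b", "event": "ad_served"},                      # served, never played
    {"user": "c", "event": "ad_served"},
    {"user": "c", "event": "ad_progress", "pct_played": 25},  # abandoned early
]

# Exposure-style metric: every served ad counts as an impression
impressions = sum(e["event"] == "ad_served" for e in events)

# Viewability-style metric: count only ads that actually played past a threshold
viewed = sum(e["event"] == "ad_progress" and e["pct_played"] >= 50 for e in events)

print(impressions, viewed)  # 3 impressions served, but only 1 meaningfully watched
```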
The more critical underlying issue, though, is the complexity of the online environment.  Groups and firms are seeking to use the Net for a wide variety of advertising, marketing, and public relations - and increasingly are developing and utilizing the messaging potential of the Net in new and innovative ways.  It's not at all clear that all of these can be reliably and validly measured by a single common metric.  Just consider the recent battle in the TV market over what types of viewing get counted - the traditional Nielsen ratings counted "live" viewing at home (as broadcast).  But what about time-shifted viewing (DVRs), which now makes up 20% or more of program viewing - or Internet-streamed videos - or viewing on PCs, laptops, and mobile devices?  If someone's simultaneously watching multiple programs (using multiple sets, PIP, or tablets as second streams), how is that to be counted?  The TV ad market's been dealing with this for years.  But once an industry establishes a consensus standard, it becomes difficult to reach consensus on a new one, even if all parties agree the old one doesn't really work well anymore.

Certainly, the online advertising market would like to reach a consensus standard - particularly as the marketing and advertising dollars designated for online continue their rapid rise.  And certainly, Nielsen would like to have its metric set as the new standard - it means lots of money for them, coming on the heels of several TV ratings debacles in the last few years.  And certainly, ad agencies and advertisers would benefit from having a common "currency for online advertising buys."
The problem is that Nielsen's proposed new standard doesn't appear to adequately measure the full range of online users' exposure to, interest in, and reaction to the full range of advertising and marketing messages that the Internet has the potential of embracing.  It's doubtful that any single metric would, but there's no question that other potential metrics might capture at least some of the more meaningful uses.  So here's my question for the industry -
Is the need for a consensus online advertising metric so urgent that the industry will settle for a "minimally acceptable" standard rather than work to improve and refine the measurement of online advertising audiences - particularly if history shows that relying on a consensus standard tends to delay and/or defer the development and acceptance of improved ways of measuring an expanded variety of audience behaviors?
Nielsen's a good company, for the most part, but there are a number of other interesting alternative metrics being developed and refined. Nielsen's jumping the gun here, and I hope the industry doesn't rush to accept "viable" too quickly.



Source - Nielsen Chief: GroupM Pushing XCR As Cross-Platform Ratings Standard, Asserts It's A Fait Accompli,  OnlineMediaDaily


Pardon My Stats - Misreporting

One of my fears about the potentially dismal future of journalism is the rampant display of ignorance in reporting on almost anything touching on science, research, polling, and statistics.  The source of today's rant is a headline.
Survey: Obama Scores Higher As 'True Leader'
The problem starts with the first word: the research finding being reported isn't from a survey at all - it's from a series of focus groups.  Focus groups that aren't random or representative of the general population (for instance, there are equal numbers of Republicans and Democrats, but no independent voters at all).  Focus groups differ significantly from standard surveys in that question phrasing is often inconsistent and responses can be heavily influenced by lead-in questions and earlier answers - so there may be little consistency in the wording that participants are responding to.  Furthermore, people in focus groups are asked to give their responses publicly, so there's a strong chance of "socially desirable" or "bandwagon" responses (a bias toward giving the answer the moderator wants, or just following along with what others in the group say).
The most glaring problem, though, is that this group of respondents wasn't randomly selected, and thus whatever answers they provide pertain only to themselves.  There is no scientific or statistical basis for generalizing this sample of 400 participants to the U.S. population.  Furthermore, we know from the stated 200 Rep/200 Dem makeup that this sample isn't even remotely representative.  The lack of random sampling also means there is no viable basis for standard statistical analysis - on which researchers base judgments about statistically significant models or differences.
Another hint that this really isn't a survey is that the researchers provide none of the background on methodology, or precise question wording, that the American Association for Public Opinion Research recommends as the minimal reporting necessary to understand and evaluate survey results.  Of these, the most serious for political polling is the precise wording of questions, and the nature of lead-in questions - there can be as much as a 30-40% swing in survey results based on context and wording.
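For perspective, here's what the error math would look like if this had been a proper random sample of 400 - which it wasn't; the sketch below is purely illustrative of what real survey reporting would support:

```python
import math

def margin_of_error(n: int, p: float = 0.5, z: float = 1.96) -> float:
    """95% margin of error (in points) for a proportion from a simple random sample."""
    return z * math.sqrt(p * (1 - p) / n) * 100

print(f"+/- {margin_of_error(400):.1f} points")  # -> +/- 4.9 points
```

Even under that generous (and false) assumption, a random sample of 400 carries roughly a five-point error band - and with only about a third of participants venturing an opinion, the effective sample behind any "significant margin" claim is far smaller still.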

So forget any sound basis for these results representing what "voters" think - what can you say about this sample's responses to some general inquiries?  Will either candidate "offer them a better quality of life"?  (Let's forget for the moment that neither is capable of personally doing so, and that quality of life is a vague term covering a wide range of different potentialities.)  Two-thirds of the sample had no opinion - and the researchers report that among those with an opinion, President Obama had "the higher score by a 'significant margin'".  First, there's no score here - just how many gave that response, and the researchers provide neither counts nor percentages.  Assuming they've at least got the order right, that means at least 66 or so (of 400) thought President Obama could give them a better life.  But did Obama outpoll Romney by 5 or by 25?  We don't know, as they don't say.
Then there's the "true leader" item. What makes a "true leader" - a decisive authoritarian, a strict follower of some creed (political philosophy, religion, or moral/ethical code), or a true democrat who follows majority opinion?  The researchers indicate that the numbers (or scores) for Obama are again "significantly higher" than Romney's, although the numbers for Obama have been falling and the numbers for Romney increasing.  But there is apparently no precise number or indicator of significance that the researchers are willing to share.
If you want to get to the "scientifically" observable results, here goes: the researchers asked a non-random, non-representative group of people some questions about the two main Presidential candidates.  A lot didn't really have any opinions or preferences, some liked Obama, some (a few less) liked Romney, and some are shifting their preferences from Obama to Romney.  From that, the researchers presume to offer interpretations that aren't supported by their methods and results.  Certainly, no claim about "voters" in a more general sense is demonstrably supported by the results as described.
The other thing "research reports" like this tell me is that these aren't researchers I'd trust to know or validly describe the real world.  They seem to be better at telling clients what the clients want to hear.

As for the reporter and story - they seem to accurately repackage the information they were provided, and the reporter does note that the researchers provide no real numbers, and does place quote marks around one use of "significant margin."  That might suggest some awareness that this "research" may be more spin than science.  But apparently not enough to question whether to pass along - as generalizable survey research - vague results and unsupportable generalizations about what "voters" think.  It's easy, and makes for good headlines, to talk about the opinions and attitudes of "voters" rather than of just this one sample of 400.  But doing so is at a minimum sloppy, and at worst a clear display of the reporter's ignorance of the science of public opinion research.  And that harms the reporter's and the news organization's credibility and value as journalists.

Source -  Survey: Obama Scores Higher As 'True Leader',  Media Daily News

Sunday, October 21, 2012

Local Event: Ed Zotti on Campus

Ed Zotti: Research and Writing in the Age of Wikipedia 
Speaking event at UT's University Center at 7:00 pm on October 24
Ed Zotti is credited as being the “editor and confidant” of Cecil Adams,  the writer of the nationally syndicated column The Straight Dope and multiple books. The books, columns and website all claim to be “Fighting Ignorance Since 1973 (It’s taking longer than we thought)”, and provide answers to all sorts of absurd questions.

h/t Hannah Bailey for the info

Goin' Mobile: Newspapers, Magazines Up

New research reports from ComScore and GfK include some good news for newspaper and magazine companies.
  Tablet owners are increasingly using their devices to access content from newspapers and magazines.


Tablets used to read ...     Newspapers    Magazines
daily                        11.5%         9.7%
at least weekly              13.3%
1-3 times a month            14.6%         16.7%
once a month                 37.1%         39.6%


Currently tablet use accounts for about 7% of all newspaper page views.
Research is also showing strong revenue potential from tablet owners, from both advertising and subscriptions.  The two studies continue to show that tablet owners who read newspapers and magazines on their devices are a valuable audience for advertisers - skewing male, younger, and higher-income.  The studies also find that tablet advertising embedded in newspaper or magazine content gets read or noticed by tablet users, and that when it does, there's a high degree of reader engagement with the ad.

Source -  Mobile Rising: Newspapers, Magazine Readership, Ad Engagement Up, MediaDailyNews

Saturday, October 20, 2012

Global Ad Revs Forecast - TV Up, Overall Down

Dublin-based research firm Research and Markets is offering its forecast of global advertising revenues.  It projects that total global advertising revenues, currently estimated at 340 billion Euros annually, will experience steady declines for the next four years - largely as a result of continuing transformational changes in media markets.
Despite the overall declines, the firm forecasts growth in TV ad markets.  The report suggests that in 2016, global TV advertising revenues will be 21% higher, pay-TV revenues 12% higher, and public financing & licensing fees up 7%.  Some of that growth will come from continued expansion and improvements in TV ownership - 9.4% growth in TV households (passing 1.5 billion globally), with the share of digital TV households growing to 77.6% (both by 2016).  The report also suggests that the source mix for TV will continue to change, with cable and terrestrial services losing market share to satellite and IPTV services.

Source -  Forecast: TV Ad Revs Growing, Overall Revs To Dip, MediaDailyNews

Friday, October 19, 2012

Twitter and News and TV

In a recent interview, Twitter CEO Dick Costolo talked a bit about what he saw as Twitter's role in the expanding media marketplace.
He suggested that the service's most important social impact is disintermediation - providing users with an avenue to information, news, and opinion that is not filtered through the dominant media lens.
(W)ith Twitter, people want to know what everyone else thinks and we're getting this inside-out, multi-perspective view of what's going on right now as it happens from everybody else that's watching the same thing we're watching. So I think the major transition for society has been one of from a single point of view in a filtered way to a multi-perspective, inside-out, unfiltered view of what people think of the event.
Costolo went on to say that he thought Twitter was more of a complement to existing media than a substitute or replacement for it.  When asked whether he thought people used Twitter more as an echo chamber, Costolo suggested that his hope was that Twitter gave everyone a voice, and gave followers the potential to sample the full spectrum of opinions and perspectives - but it is the follower's choice to use Twitter for enlightenment (seeking alternate perspectives) or self-affirmation (seeking support for one's own thoughts).
  As for advertising and revenues, Costolo didn't give figures, but did note that "the origin and future of Twitter is mobile," and the mobile advertising market is robust and growing.  As for the company's future, Costolo noted that
(Looking) down the road for Twitter, we would like to be a company -- a service -- that is used by billions of people around the world in every country in the world because we feel that the power of Twitter is that it brings people closer to each other, to their governments, to their heroes, etc. So that is absolutely our overriding goal for this service. I think the only thing standing in between that goal and where we are today is our own ability to execute against that vision.
Some commentators took his use of NBC's Olympics coverage as an example of Twitter's complementary nature to suggest a focus on developing Twitter as a major force in Social TV (through the "second screen"). Dave Morgan's take, as outlined in a post on the Online Spin blog, is that the design of the Twitter service, and current usage patterns tied to big news events, show that a real-time social Web service can add value to an important live news event being passionately followed by millions of citizen journalists and millions more readers, viewers, reviewers, and analysts. "It gives me hope yet for the development of the kind of powerful digital Fourth Estate that many folks thought the Internet could support," Morgan quipped.
Morgan rightfully cautions that such use isn't necessarily the best or dominant aspect of Social TV and/or second-screen applications - just an interesting and potentially useful one for news and journalism to note and try to take advantage of.

And I, like Morgan and Costolo (if with less specificity), expect that "the intersection between media like television and web services will be ... wide and deep and diverse."  "Control" is probably not the best word to describe that relationship.

For more of Costolo's comments, check the interview transcript at the source below.

Source - Twitter CEO Dick Costolo on Jack Dorsey, ad revenue, going public, Marketplace Tech
Twitter Clear Early Leader To Control TV's 'Second Screen',  Online Spin


Shift in Gen Y Primetime Viewing Habits

A new report by research firm GfK examines trends in primetime viewing habits over time in the US, and finds that Gen Y consumers are developing very different viewing habits than their parents.  Their basic conclusions are that diffusion of media devices continues to expand, and that the amount of TV use in primetime hasn't changed much - but how people use TV during that time has changed, and the younger generation is driving that change.
For the Gen Y age group (13-32), the report indicated the following primetime viewing patterns:
  •  Live broadcasts - 57%  (down from 82% in 2008)
  •  Recorded programs - 28%  (up from 15% in 2008)
  •  Streaming video - 12%  (none reported in 2008)
•  Playing videogames - 13%  (12% in 2008)
In contrast, Gen X (aged 33-46) primetime viewing included only 3% streaming and 4% videogame play, while the proportion of recorded program viewing was about the same (26%).  (And yes, the numbers above add to more than 100% as a result of multitasking.)
But the habits of the youngest group are most noteworthy because they provide directional insight as to where consumer behavior is headed in the coming years. Streaming video on the TV doesn’t necessarily equate to viral videos or Web series. In many cases, viewers may be streaming TV shows via Netflix, or renting programming on iTunes or Amazon. Nevertheless, the increased usage underscores that broadband video is converging with the TV set for the younger generation.
The report also looked at "daily media time" in 2004, 2008, and 2012.  One not-so-surprising result is that the amount of time spent using media increased over the last four years, largely as a result of increases in time spent online and the opportunities that mobile offers to use media.  In 2012, mobile accounted for about 7% of daily media time, online about 30%, radio about 20%, and TV 40%.  Use of printed media (newspapers and magazines) has fallen to 3-4% of daily media time.
The good news for advertisers was that the study showed little change in people's attitudes toward TV advertising. The bad news for broadcast networks was that only 42% of respondents named a broadcast network (as opposed to a cable network) when asked to name the 3 channels they would turn to first when watching TV in primetime.  GfK's conclusion:
Media companies should be thinking of primetime more as a concept - that time period during which people will always be looking to be entertained in the evening - than as a static slate of scheduled programs, and offer content/platform choices that can be utilized by all of today's audience niches...
(W)e believe long term success will come to those who embrace alternative viewing options while still serving the large audience for traditional, linear primetime programming.
 Sounds like some good advice.


Source - Streaming Video on TV Set Cutting into Primetime TV Viewing for Gen Y 12% of Time, VidBlog
Primetime TV 2004-2012: Among persons 18-49, GfK Media white paper (request download)

Cool: Ghosts of WWII (Images)

The UK's Daily Mail has a fascinating piece on the work of a Dutch historian who - after finding a collection of WWII negatives in a flea market - matched "atmospheric" photos from WWII with new photos of the same locations, and combined them.


(Historian Jo Teeuwisse) believes that making war scenes familiar by linking them to somewhere we recognise heightens their impact.
'I knew what happened there, but knowing the exact spot of some detail will etch it into your visual memory,' she said.


Be sure to check them out.

Source - Ghosts of War, MailOnline
Ghosts of History, flickr album
Ghosts of History, Facebook page

Thursday, October 18, 2012

Newsweek to abandon print

Many hoped that Newsweek's fire sale (price: one dollar) two years ago would rejuvenate the troubled news magazine.  Reaction when Tina Brown was named Editor-in-Chief was more mixed.  But three months ago the family of the late Sidney Harman (Newsweek's purchaser) pulled their financial support. At that point most analysts felt the end was near for what had been a heavily subsidized print edition.
Then came the announcement yesterday - not in the print magazine, or its online edition, but on The Daily Beast website - that Newsweek would stop publishing a print edition after the Dec. 31, 2012 issue. Newsweek execs said they would rebrand the online remnants as Newsweek Global and continue to rely on subscriptions from online readers for revenues.
The new publication will be "a single, worldwide edition targeted for a highly mobile, opinion-leading audience who want to learn about world events in a sophisticated context," the statement said.

"This decision is not about the quality of the brand or the journalism--that is as powerful as ever," (Tina Brown) wrote in a statement on the Daily Beast website. "It is about the challenging economics of print publishing and distribution."

Brown is somewhat correct in terms of the economics of print - it's certainly more costly than online distribution.  But many print magazines are successful.  The more fundamental problem is on the revenue side - not enough people think the print edition and its content are worth buying to cover the editorial and print distribution costs.  While Newsweek was historically a proud and valuable brand, its continuing decline and failure in the magazine marketplace suggests that the quality of the brand and its journalism is a major part of the outlet's decline.

Source - Newsweek to End Print Edition, Wall Street Journal

Measurement Issues: Online, Social, & RTB/Exchanges

This seems to be a good week to write about metrics - measuring audiences and/or media use.  I posted yesterday about Nielsen's plans to use audio code readers and STB (Set Top Box) data from cable and satellite services to try to get a fuller sense of today's TV viewing behaviors, and a few days ago about Billboard revising how it measures a song's popularity in its Top 100 song charts.  In this post, I'll try to highlight some other news and discussion of metrics.
  Measuring media use has always been a bit problematic - but in the old world of media silos, media and advertisers were able to eventually agree on standard measures.  Audited circulation numbers for newspapers and other print media, ratings and shares for broadcasters - none were perfect, but with a bit of back and forth between media and advertisers, a general consensus was reached on specific formulas for calculating media use that were considered valid and reliable enough to be useful.  Reaching consensus was helped by the limited number of options available for consumers to access the content.
The rise of digital networks and the growth of media outlets disrupted things - increased competition fragmented audiences, making accuracy in measurement more critical.  To illustrate: in the 1960s, primetime programs on the big three broadcast networks earned ratings in the 20s - a typical night might see shows with 25, 22, and 19 ratings competing, and those 2-5 point differences were well outside standard measurement error.  This fall's network premiere week, by contrast, saw the top network primetime programs earn ratings in the 2.5 to 3.1 range (with a 1.9 rating considered a flop), with the result that critical decisions on buying advertising and/or cancelling programs are based on differences of a single ratings point, or even tenths of a point.  While improved methods have reduced sampling and measurement error a bit, a lot of ratings-based decisions today rest on differences that are statistically insignificant.
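Some quick back-of-the-envelope math shows why tenths of a ratings point are shaky ground. The sketch below treats the ratings panel as a simple random sample of 25,000 homes - an assumption of convenience, since Nielsen's actual panel design is more complicated (and its real error bands somewhat wider):

```python
import math

def rating_moe(rating: float, n: int = 25_000, z: float = 1.96) -> float:
    """Approximate 95% margin of error, in ratings points, treating the
    panel as a simple random sample (a simplification of Nielsen's design)."""
    p = rating / 100
    return z * math.sqrt(p * (1 - p) / n) * 100

print(f"a 25.0 rating: +/- {rating_moe(25.0):.2f} points")  # ~ +/- 0.54
print(f"a  2.8 rating: +/- {rating_moe(2.8):.2f} points")   # ~ +/- 0.20
```

At 1960s rating levels, a multi-point gap between shows dwarfed the error band; at today's levels, a 2.8 vs. 3.0 matchup sits entirely inside it.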
The growth in media outlets and the convergence of content markets created another problem.  With much content accessible through multiple outlets, and with users also having the options of time-shifting or place-shifting, two critical issues for metrics emerge: which of these alternative uses should count, and can you develop measures that are comparable across the range of options?
Consider a relatively simple case - a daily local newspaper that also puts stories online.  You have the ABC circulation numbers for how many papers were sold (and presumably read) that day.  And metrics abound on the Internet, so it's easy to count how many times the paper's website was accessed; you could also count how many "unique" site visits occurred.  But online news use is different from print - with print you can argue that subscribers and purchasers at least scan a large part of the paper, while the online news reader may look only at a couple of specific pieces of information - the weather forecast, a favorite columnist's latest piece, etc.  Is accessing something on a newspaper website the equivalent of a sold copy?  And if you approach metrics from the advertiser's perspective, they want to know how many people saw their ad.  Should web use count if their ad isn't on the website?
Another issue is the web itself - which on one level can easily measure certain things: page downloads, unique visitors, click-throughs, posts, ad nauseam.  But none of these directly gets at the real question - what the web user did with the content.
So you can understand that there are a lot of questions these days about who and what should be measured - and a lot of push from both media and advertisers/marketers to develop ways of measuring audience consumption, use, and impact of various types of content.

The Nielsen and Billboard posts talk about ways of expanding metrics to include new forms of consumption - Billboard adding online sales and plays on streaming services to the traditional radio airplays; Nielsen's audio code reader providing a mechanism to capture second-screen viewing (among other things).  But there's still a lot to work on with respect to online media use metrics.

Gail Belsky has a piece on OMMA (a news blog covering online marketing and advertising) that considers a rising issue in the online video market - the lack of standard metrics.  Online video use is booming, as is online video advertising - but many advertisers are hesitant to enter the market without reliable metrics and standards that let them directly compare online video viewing with traditional TV viewing.
(Online video viewing) still lacks uniform ROI metrics to compare it to TV — a concern even for pioneer online advertisers. “At the end of the day, it’s really about measurement,” says Greg Milner, director of global interactive marketing at computer maker Lenovo. “Is your ad working or not?”
Surveys of advertising and marketing executives suggest most feel that developing consensual and reliable metrics is important for the future development of the online advertising market.  For now, some potential advertisers are satisfied with online advertising's ability to reach highly targeted market segments, and with the ability of online video to create a "viewing experience" and promote engagement. For the mobile advertising market, there is also concern about whether the small screen can deliver the same impact. But for many others, the lack of a uniform metric comparable to those in traditional media contributes to uncertainty about the relative value and impact of online advertising - even if they're convinced they need to be in the online market.  So for now, many find themselves testing the waters with small buys.
“It’s an environment where viewership is,” says Sacerdoti (CEO of video advertising network BrightRoll). “Advertisers should be present and testing that environment. There’s absolutely no doubt that if you’re a top 200 advertiser in tv, you should be testing mobile.”
Meanwhile, over at GigaOM, Mathew Ingram looks at metrics for social media.  More specifically, he notes that media outlets, and the firms advertising and marketing through media, are interested in how people got to a target site.  Social media seems to be driving a lot of traffic today, but it's not that easy to monitor or measure its impact.  Alexis Madrigal suggests that a lot of traffic and referrals prompted by social media posts come through channels that most current web-analytics metrics toss into a catch-all category called "direct".  The HTTP request used to fetch a page can include various bits of metadata, including a referrer tag identifying the originating source of the visit.
 There are circumstances, however, when there is no referrer data. You show up at our doorstep and we have no idea how you got here. The main situations in which this happens are email programs, instant messages, some mobile applications*, and whenever someone is moving from a secure site to a non-secure site.
To illustrate his point, Madrigal used a metric from Chartbeat that broke the "direct" category into several subcategories, including one linked to social media use, and applied it to The Atlantic magazine's website (see pie chart).
(Chartbeat) took visitors who showed up without referrer data and split them into two categories. The first was people who were going to a homepage (theatlantic.com) or a subject landing page (theatlantic.com/politics). The second were people going to any other page, that is to say, all of our articles. These people, they figured, were following some sort of link because no one actually types "http://www.theatlantic.com/technology/archive/2012/10/atlast-the-gargantuan-telescope-designed-to-find-life-on-other-planets/263409/." They started counting these people as what they call direct social.
Chartbeat estimated that social traffic to media sites is significant - 17.5% of all referrals were social (coming directly from a social media site, or arriving as "dark social").  Only search engines generated more referrals (21.5%).  But as much as 70% of those social referrals were "dark" - missed by many web metrics services, in the sense that they were not identified as social referrals.  This dark social traffic comes through email, IM, chat apps, and a variety of other conduits, but is likely initiated by social media content.
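In web-analytics terms, the classification Chartbeat describes works roughly like the sketch below. The domain list and path rules here are my own assumptions for illustration, not Chartbeat's actual rule set:

```python
from urllib.parse import urlparse

SOCIAL_DOMAINS = {"facebook.com", "twitter.com", "t.co", "reddit.com"}  # assumed list

def classify_visit(referrer: str | None, landing_path: str) -> str:
    """Rough sketch of the 'dark social' idea: no referrer plus a deep article
    URL almost certainly means someone followed a shared link."""
    if referrer:
        domain = urlparse(referrer).netloc.removeprefix("www.")
        return "social" if domain in SOCIAL_DOMAINS else "other referral"
    if landing_path in ("/", "/politics"):   # homepage or section landing page
        return "direct (typed in)"
    return "dark social"                     # deep link with no referrer data

print(classify_visit("https://t.co/abc123", "/technology/archive/263409/"))  # social
print(classify_visit(None, "/"))                                # direct (typed in)
print(classify_visit(None, "/technology/archive/263409/"))      # dark social
```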

A third post, from John R. Osborn on the Online Video Insider blog, also points to the importance of resolving measurement issues.  Osborn's focus is on the emergence of RTB/Ad Exchanges - RTB standing for Real-Time Bidding - where advertising inventory is pooled from a wide array of media outlets and auctioned off just prior to the advertising slot (in contrast with upfront ad buys that occur weeks or even months in advance).  The development of these exchanges offers significant opportunities for smaller media outlets (particularly those online) to tap into advertising and marketing revenue streams.  It also opens opportunities for advertisers and marketers to react quickly to opportunities and events, and to aggregate highly targeted audiences without having to seek out and negotiate with individual outlets.  The primary thrust of the post is that if these exchanges can capture some of the existing advertising/marketing revenues, and develop added revenue from small firms and organizations that aren't in the traditional markets, the revenue potential could make ad-revenue business models viable for even the smallest of online video outlets.
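Mechanically, the heart of such an exchange is just a very fast per-impression auction. A toy sketch follows (bidder names and prices are invented; second-price clearing is common in RTB, though rules vary by exchange):

```python
# Toy auction for a single ad impression; all names and prices are hypothetical.

def run_auction(bids: dict[str, float]) -> tuple[str, float]:
    """Award the impression to the top bidder at the second-highest price
    (a common, though not universal, RTB clearing rule)."""
    ranked = sorted(bids.items(), key=lambda kv: kv[1], reverse=True)
    winner = ranked[0][0]
    clearing_price = ranked[1][1] if len(ranked) > 1 else ranked[0][1]
    return winner, clearing_price

bids = {"brand_a": 4.10, "brand_b": 3.75, "brand_c": 2.20}  # CPM bids
winner, price = run_auction(bids)
print(f"{winner} wins the slot, pays ${price:.2f} CPM")  # brand_a pays $3.75
```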
The key to the long-term success of this advertising market segment (and thus its viability as a significant component of media outlet business plans) is developing a common basis for measuring ad exposure, and establishing consensual price/performance measures that are comparable to other advertising formats and markets.  As Osborn notes,
Many players are focused on development of a common buying currency, (sometimes called “digital GRPs”) to help accelerate the use of RTB and video ad exchanges.
Furthermore, if that can be done, the near-real-time metrics available for online media and ads could give RTB/Exchanges a serious competitive advantage over upfront buys, by reducing uncertainty about actual audience size, demographic make-up, and current interests/focus.
Advertiser value will increase as constant, real-time feedback and adjustments get the message in front of the right audiences in the right context at the right time.
Development of these exchanges provides a number of opportunities for online media outlets and content providers, and for advertisers and others interested in reaching targeted audiences with their own messages.  Furthermore, the fact that these exchanges promote on-the-fly aggregation of both outlets and advertisers opens the way for many smaller-scale outlets and individuals, on both the supply and demand sides, to get into, and benefit from, the rapidly growing online advertising market.
A recent study from Index Platform found that about half of advertisers and publishers are already testing RTB/Exchanges.  The study also suggested that their greatest concerns were essentially about metrics - buyers wanting transparency and reliable measures of audiences; publishers wanting better assurance that the ad value coming from RTB is comparable with other sources.  Respondents also argued that developing a market-wide consensus on metrics, values, and practices would promote growth of this market segment.  While most thought RTB revenues would grow, there was a common underlying perception that developing commonly accepted metrics will be critical for this aspect of the market to fully develop.

Each of the posts raises and discusses some interesting issues about the inadequacy of current metrics for online content and outlets.  All are worth a closer read.

Sources -  Video Goes Godzilla,  OMMA blog
Dark social: Why measuring user engagement is even harder than you think, GigaOM
Dark Social: We Have the Whole History of the Web Wrong, the Atlantic
How RTB Video Exchanges Will Create a New TV Business Model, OMMA blog
Real Time Bidding and Ad Exchanges: What works and what doesn't, report from Index Platform

Tuesday, October 16, 2012

A New Business Model for TV programs?

  In a piece originally published in The Hollywood Reporter magazine, Lacey Rose suggests that a new program production business model is emerging in cable.  My response - sort of, but not really that new.
Historically there have been two basic business models for television programs.  In one, the broadcaster/network finances and owns the program - and bears both the risk of failure and the benefits of success.  In the other, program production and ownership are separate, and the broadcaster/network licenses the right to air the program.  In that case, the program owner bears the risk of failure, and reaps the rewards that come with success.  While the real world allows some intermixing of the models, and the production of very expensive programs may require commitments from networks or sponsors, at heart these are the two fundamental models.
In the U.S., a series of policy moves pushed the production model to the second stream - for decades broadcast networks were prohibited from any ownership of either the programs they aired or their distribution rights beyond the initial network license agreements.  As demand for programming grew, these rules were loosened, and networks roared back into program ownership.  There's some economic downside to the network/studio ownership model, however.  First, these large firms tend to have higher production costs (union contracts and rules, and internecine competition for top talent, push costs up).  Second, as generalists, they tend to follow trends rather than accurately predict shifts in audience preferences.  Third, with network ownership in particular, there is a tendency to emphasize short-term success in terms of revenue potential (that is, a focus on first-run licensing revenues).  Combine these, and firms become risk-averse.
So if you look at the big network program production model, you'll see a mix of high-cost entertainment programs (with a sizable number owned by the network); high-cost sports programs obtained through licensing arrangements; low-cost news programs owned by the network; and low-cost "reality" programs (again, mostly licensed).  The networks don't fret too much about ownership of the latter three program categories, figuring that there is no significant secondary market for news, sports, and reality programs.  The production costs for entertainment programs are highly inflated (for a number of reasons), so they require substantial secondary market revenues to even have a chance at recouping production costs.  Networks need these programs to remain viable and competitive, so there is a strong incentive to obtain ownership rights - first to guarantee supply, but also in the hope that secondary market revenues can offset the initial high costs of production to some degree.
What Rose noticed, and writes about, is the growth of made-for-cable entertainment programs.  Cable audiences and revenues aren't large enough to support entertainment programming - not using the basic business models above, anyway.  So is there a new, different business model for cable?  Not totally new in the traditional sense; what Rose notes is that the cost reductions digital brings, and the development of new marketing channels for video programs, have combined to make both the producer/owner and network ownership models financially viable at the smaller scale of cable markets.
Rose looks at the success of program production house FXP (affiliated with the FX network).  The company started with a belief that producing sitcoms for cable networks could be viable if you could produce them cheaply and keep ownership rights (so that you could benefit from licensing the program to the growing range of distribution outlets, merchandising, and in some cases, games and movies).
"Back then, the big traditional studios didn't think there was any money to be made from cable comedy, so they ignored the space," says one executive. What (FXP figured out was that) Half-hours could make sense if you made them cheaply and -- importantly -- if you owned them. (Comedy tends to generate lower ratings during the first run but holds up better in repeats; so if you're simply licensing a series, you're paying more to get less and aren't able to take advantage of that ancillary revenue.)
New technologies meant programs could be made cheaply, if you could get talent inexpensively (and, frankly, if you could produce in a non-union environment) - and talent could be enticed to take lower initial direct compensation, particularly if you could offer creative freedom and a piece of the downstream licensing rights.
  The growth in the number and value of secondary licensing markets, though, has meant that the networks are increasingly interested in program rights ownership.
"We've just figured out a way to make comedies less expensively than almost anybody can do it," notes Schrier (VP of FXP) (discussing) a template that has them producing half-hours for $400,000 to $700,000 an episode (a network sitcom can cost four times that).
It also helps that in addition to the traditional rerun and international licensing markets, substantial new markets have emerged in licensing content for home video (DVD), on-demand, streaming video, merchandising, videogame, and even movie rights.
For instance, if FXP wholly owned the Sony TV-produced Damages -- and thus benefited more from such ancillary revenue as the nearly $2 million an episode in international license fees -- (FX Networks President) Landgraf says he would have kept the drama on the network for more than three seasons.
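The own-vs.-license arithmetic driving all of this can be sketched in a few lines (every figure below is invented for illustration, loosely scaled to the per-episode costs quoted above):

```python
# Stylized per-episode economics; all numbers are hypothetical.

def owner_profit(cost: float, first_run_fee: float, ancillary: float) -> float:
    """A producer-owner collects the first-run license fee AND downstream revenue."""
    return first_run_fee + ancillary - cost

def licensee_network_profit(license_fee: float, ad_revenue: float) -> float:
    """A network that merely licenses keeps ad revenue but no ancillary upside."""
    return ad_revenue - license_fee

# Low-cost cable comedy, wholly owned: first run roughly breaks even,
# but reruns, streaming, and international sales turn it profitable.
print(owner_profit(cost=600_000, first_run_fee=500_000, ancillary=900_000))  # 800000

# The same show, licensed only: the network's upside is capped at its ad sales.
print(licensee_network_profit(license_fee=500_000, ad_revenue=550_000))      # 50000
```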
  What Rose looks at isn't really a new business model in the traditional, pure sense.  What's new, and what's impacting programming markets, is the realization that shifting cost structures and emerging secondary licensing markets let the traditional models work on a much smaller scale.  It's a new cable model in the sense that those models can now work at the smaller audience/revenue scale of cable markets.  And in terms of presenting new opportunities and reshaping program production markets, that's what is important.

Source -  The New Cable Model: Why It's Better to Own Than to Rent (Analysis), The Hollywood Reporter


Nielsen Introduces Eavesdropping Meter

Ratings firm Nielsen is introducing a new form of metering device as it expands local samples.  The new device listens for audio cues embedded in television programs (as well as other content like ads and PSAs) to record viewing behavior - and pushes that data to Nielsen using wireless technologies.  Nielsen hopes that this less intrusive technology (compared to the hard-wired Local People Meters, or LPMs) will let it expand its sample size in some local markets.
  Nielsen will initially install the devices in three local markets - in homes with LPMs as well as in newly recruited households.  Eventually Nielsen hopes to expand the system to include "audio code" samples in all US television markets, to supplement LPMs, traditional diaries, and data gathered from set-top boxes.
   Audio code readers have been used for several years to record air play of songs and commercials on radio, and to track PSA and promo package use on television channels.  Those systems typically run on computers, however, since the archive of audio cues needs to be constantly updated to include cues for new program segments.  And the TV ecosphere of programming content and alternatives is much more complex.

  For Nielsen, the viability of the data will depend heavily on whether or not it can reliably record TV use.  There are several specific challenges Nielsen faces in that regard.  First, it has to be able to recognize and record the full range of TV viewing options, and reliably distinguish among the panoply of TV content.  Specifically, Nielsen will need to develop and continuously update an archive of audio cues used to determine what program is being heard, and have the processing power to recognize and distinguish those cues - otherwise, content is recorded as unknown.  Having too many unknowns raises questions about validity and reliability.  Essentially, that means Nielsen will need to either work with program producers to embed predetermined audio cues throughout the TV/video industry, or develop a way to continuously update its archive of the audio cues used to distinguish content.
  If Nielsen only codes for program (and not for individual episodes), the need for updates is reduced - but the system would then be unable to distinguish among episodes, particularly episodes that are time-shifted.  Is the episode of NCIS being heard the most recent one, or the cliffhanger from last season?  Is it an episode timeshifted from CBS, or watched on cable channel USA, through an MSO's on-demand offerings, streamed from CBS.com or another video streaming service, or viewed from a DVD?  If Nielsen wants to differentiate among program sources, it needs an audio code system that assigns unique codes to each source.
  So a comprehensive, continuously updated set of audio cues is needed - and is needed at the location where the determination of viewing is done.  If that's at the audio code reader, then that device needs the processing power to determine viewing, the storage capacity to hold the archive of audio cues, and the means to continually update that archive.  Alternatively, the household audio code reader could merely listen and send what it hears to a location that has the continuously updated archive and the processing power to determine what is being heard.  (And if it's the latter, there are some really serious privacy issues at play.)
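  To make the lookup problem concrete, here is a minimal sketch (in Python) of resolving a heard audio code against an archive - emphatically not Nielsen's actual system; the codes, episodes, and archive structure are all hypothetical:

# Hypothetical archive mapping audio codes to (program, episode, source).
# Distinguishing episodes and distribution paths requires a unique code
# per episode/source combination - which is what drives the need for
# the constant archive updates discussed above.
CUE_ARCHIVE = {
    "a1f3": ("NCIS", "S10E05", "CBS live/DVR"),
    "b7c2": ("NCIS", "S09E24", "USA rerun"),
    "d9e8": ("NCIS", "S10E05", "CBS.com stream"),
}

def identify(heard_code):
    """Resolve a heard audio code, or return None for unrecognized content."""
    return CUE_ARCHIVE.get(heard_code)

match = identify("b7c2")
if match is None:
    print("unknown content")   # too many of these undermine the ratings
else:
    program, episode, source = match
    print(f"Credited: {program} {episode} via {source}")

Every code the reader hears but cannot resolve lands in the "unknown" bucket - which is exactly the validity problem described above.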
  Wherever the coding is done, there are other potential problems with trying to tie viewing to audio cues.
First is the obvious problem that Nielsen has been battling since it started using meters: hearing an audio cue may mean the TV is on, but not that anyone is watching (or how many are watching).  If the environment is too noisy, will the sensor be able to differentiate audio cues from background noise?  Then there is the question of what happens when the audio cue isn't audible - say, if someone is watching with headphones or has the sound off for some reason, or bypasses the portion of the program containing the cue (switching among programs, starting to view late, skipping or fast-forwarding through timeshifted or streamed content, etc.).  Will the audio coder be able to differentiate between viewing on a TV set and second-screen viewing?  Will Nielsen be willing to place an audio coder in each room containing a TV set in multiset households?
  The audio coder systems used to track song play on radio designate programming they don't recognize as "unknown."  The questions I raise about Nielsen's implementation of audio coding come down to how much viewing behavior is missed or coded unknown by whatever system Nielsen develops.  I can see a good system being able to recognize and record when a program with a recognized cue is on, in normal (traditional) viewing contexts.  What's unclear is this particular system's ability to reliably recognize and record the full panoply of program and viewing options in less-than-perfect contexts.  I suspect these are the questions the Media Rating Council (MRC) will ask before it certifies this new approach to measuring viewing.

Source -  Nielsen's New Local TV System Moves Into Living Rooms, TVBlog

US Mobile Market Still Growing

The latest report from comScore's MobiLens service indicates that mobile device ownership and use continues to grow - 234 million Americans aged 13 or older used mobile devices (more than 90% of that age group).  It also shows smartphone penetration growing to 116.5 million in August, a 6% gain from last May.  As a result, smartphones are used by about half of all mobile users, and around 37% of the U.S. population.
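  Those shares follow directly from the raw counts.  A quick arithmetic check (the user and smartphone counts are from the report; the roughly 314 million U.S. population figure is my assumption):

mobile_users  = 234_000_000   # Americans 13+ using mobile devices
smartphones   = 116_500_000   # smartphone owners, August 2012
us_population = 314_000_000   # assumed 2012 U.S. population

print(f"smartphones / mobile users:  {smartphones / mobile_users:.0%}")   # ~50%
print(f"smartphones / US population: {smartphones / us_population:.0%}")  # ~37%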
  Looking at what mobile users do (other than make calls) - 75.6% sent text messages to other phones, 53.4% used downloaded apps, 52.0% used a browser, 38.7% accessed social networking sites or blogs, 34.0% played games, and 28.3% listened to music on their mobile device.
  On the hardware side, Samsung remained the top mobile device OEM, but a surge in Apple device ownership moved it into a close third (behind LG).  Android remained the dominant OS, growing slightly to 52.6% of the U.S. smartphone market.  Apple's iOS also showed some growth in market share, and is now used on more than a third of devices (34.6%).  Those gains came primarily at the expense of RIM's BlackBerry system, whose market share dropped precipitously to 8.3% (down from 11.5% in May).  Microsoft and Symbian also saw their shares of the OS market drop.

Source - comScore Reports August 2012 U.S. Mobile Subscriber Market Share, comScore press release

Monday, October 15, 2012

New Life for Phone Booths

In New York, City24x7 is turning old phone booths into Information HotSpots.  Replacing the old landline phone is a large net-connected touchscreen and free Wi-Fi hotspot.  The screen displays real-time information about what's happening nearby - local news, transportation info, reviews of local merchants, bars & restaurants, and warnings of weather or environmental hazards - and, through Wi-Fi, lets users notify authorities of emergencies.  Eventually the company wants to add Skype functionality.
"We’re trying to use technology and new social media to inform, project, and revitalize cities," says City24x7 CEO Tom Touchet.
City24x7 is currently finishing a pilot test in New York, which successfully integrated existing apps and social input into a screen that provides information "tailored to your exact location and available in one place."  The pilot also generated a lot of interest from potential advertisers, who can use the displays to send out highly targeted messages.  Soon, the company hopes to install 250 SmartScreens throughout New York, and it is already talking about expanding the service to some 20 other cities around the globe.  Once out of pilot status, cities will receive about a third of the revenues generated by the screens.

Source -  Turning Old Phone Booths Into Digital Information Hubs, FastCompany Co-Exist

Billboard Incorporates More Digital Info

Earlier this year, Billboard started including data on digital sales and online streaming in its Hot 100 singles chart (all genres).  Originally, Billboard relied primarily on radio station air play to measure a song's popularity - and for genre charts like Country-Western, Latin, and Hip-Hop, it limited its data to radio stations whose exclusive programming focus was that genre.  Thus, the genre charts failed to incorporate digital plays and sales, or the success that cross-over artists achieved in other genres.
  Last week, Billboard announced the creation of a new chart for R&B music, to better distinguish that genre from Hip-Hop.  More importantly, though, Billboard is changing how it calculates its top singles charts for the various genres.  First, it will expand the number of stations included in determining airplay counts.  Then, it will use the hybrid formula developed for the Hot 100 chart to reflect digital sales of singles and play counts from major streaming services.  This should provide a more inclusive basis for measuring artist and song popularity.
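  Billboard hasn't published its exact formula, but the basic shape of a hybrid score is easy to sketch.  Here's a minimal illustration in Python - the weights and normalizing scales are my assumptions, purely to show how airplay, sales, and streams can be blended into one comparable number:

def chart_points(airplay_audience, digital_sales, streams,
                 w_air=0.35, w_sales=0.35, w_stream=0.30):
    """Blend the three inputs into a single comparable score."""
    # Normalize each input to a rough common scale so the weights,
    # not the raw magnitudes, determine each component's influence.
    return (w_air * airplay_audience / 1_000_000
            + w_sales * digital_sales / 100_000
            + w_stream * streams / 1_000_000)

# A crossover single: modest on genre radio, strong in sales and streaming.
print(round(chart_points(airplay_audience=4_000_000,
                         digital_sales=150_000,
                         streams=9_000_000), 2))

Under the old airplay-only approach, that single's chart position would rest entirely on the first input; a hybrid score lets its sales and streaming strength pull it up the chart.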
  The recording industry welcomed the change.  Jim Donio, president of the National Association of Recording Merchandisers (NARM), released the following statement.
"With digital downloads hitting record highs and streaming services such as Spotify, Muve, Slacker, Rhapsody, Rdio, MOG, Sony Music Unlimited and Xbox Music continuing to grow, the impact of digital music is growing more and more pronounced... We are happy to see that Billboard recognizes this trend and is taking steps to ensure that its charts will continue to serve as the industry standard well into the future."
 Source - Billboard shakes up charts to include digital, streaming data, CEN Audio blog

It's About Time: BBC Apologizes

Last week the BBC apologized to a group of women for a decades-old sex scandal.
  In the 1970s and 1980s, a number of women claimed they had been sexually abused by then-prominent BBC entertainer and host Jimmy Savile.
Some women said Savile had abused them when they were as young as 12 and described a culture of sexual abuse inside the BBC at the height of Savile's fame in the 1970s and 1980s. Some also alleged that they had been attacked on BBC premises.
  The story came to light after a documentary on the subject was aired by rival network ITV.  The ITV documentary raised questions about both Savile's conduct and the BBC's handling of the situation at the time and ignited a firestorm of criticism within both media and government.  Criticism was further fueled when it was learned that the BBC's own flagship news program, Newsnight, had been investigating the claims before the program's editors shelved the project last December.
  The BBC said it would "address all questions," but only after the police investigation was concluded.  In the meantime, Director General George Entwistle issued an apology to the women involved:
 "The women involved here have gone through something awful and it's something I deeply regret... I would like to apologize on behalf of the organization to each and every one of them for what they have had to endure here."
Source - Britain's BBC apologizes over sex abuse scandal,  Broadcast Newsroom

Friday, October 12, 2012

Internet's News Impacts - Bad and Good(?)

Kaila Colbin has an interesting piece today on what the Internet has meant, and may mean, for the future of journalism. She starts with the argument that

The need for speed is not only killing your company, it is destroying any standards of accuracy and integrity that remain, resulting in, for example, media outlets publishing reports on Supreme Court decisions before said Court has even finished reading said decision.
That's hardly news, though, or solely the fault of the Internet.  From journalism's earliest days, being first with the news has been an important focus.  As newer media forms develop - particularly those that can package and disseminate their news products more quickly - the tempo of journalism speeds up.  Before the Internet was widely diffused, critics say, cable news networks' 24/7 news cycle and potential for real-time coverage of events transformed the focus on speed into a critical need for immediacy.  Another outside influence is money - specifically, the commercial nature of most news outlets and the need to attract enough customers to subsidize news (directly through subscriptions, or indirectly through advertising or other subsidies).  When money was flush, commercial pressure had little impact on journalism; but digital technology and the Internet expanded competition and shifted revenue streams to the point where most news organizations have moved from serving audiences to battling for them.  Getting and keeping your audience becomes a primary focus.
   The problem for journalism, though, is not the simple facts of a faster news cycle or commercialism - it's the concern that the push to be first and to attract audiences is trumping core journalistic principles of verification and accuracy. Colbin illustrates the concern with a quote from Jeremiah Owyang, an analyst for research firm Forrester.
“…the way today’s world is set up is that 1) whoever publishes first (whether it's accurate or not) 2) whoever is the most interesting (and thereby gets the most social shares and rewarded by Google) wins… the only way for journalism to focus on fact checking and accuracy is to change the business model, and if the two requirements listed above (fast and interesting) get page views (and thereby ad dollars)… this will only persist.”
To be fair, I'd argue that the Internet isn't directly responsible for any decline in journalism, at least not in that way.  In terms of speed and money, the rise of the Internet has greatly expanded the level of competition traditional news outlets face - and pushed them to try to make their news products more competitive and more valuable to potential news consumers.  But shifting the focus to speed and entertainment over traditional news values isn't the only option news organizations have for becoming more competitive or valuable to the market.  In fact, I'd argue that doing so lessens the core news value of their product and makes them less competitive.  For journalism, and for those organizations that are first and foremost "news outlets," those are bad business practices and business models.  For "gossip" or entertainment outlets, not so bad.
  I would add that I think the Internet's most significant negative impact on news and journalism - or at least on people's perception of news - comes from a different pair of structural features: interactivity and disintermediation.  By opening up the range of news and information sources, the Internet allows news consumers to bypass the traditional gatekeepers of news.  They no longer have to rely on journalists' judgments about what's newsworthy or, more critically, on their accuracy.  Internet news consumers can independently authenticate or verify news and information - or expose its falsehood.  And the more the Internet news consumer finds discrepancies in news judgments and reporting, the less faith and trust they place in journalism and the less value they place in those news products.
  Interactivity contributes to that spiral in a couple of ways.  First, news consumers can themselves become news producers - adding their perceptions of events they witness, or contributing specialized expertise in topics addressed in news reports. Second, when people do so in good faith, the response of the "real journalists" is often dismissively arrogant, and sometimes openly hostile.  That's not going to help build faith and trust.

  But the impact of the Internet on news and journalism does not need to be significantly and relentlessly negative.  For instance, journalists and editors could, in most cases, employ the Internet and digital telecommunication to quickly and easily authenticate and verify information - meaning that those traditional journalistic principles do not necessarily need to be sacrificed at the altar of speed and immediacy. In addition, providing access to background information, original source materials, and other supporting information can add value to the basic news story.
  Journalists could also take advantage of the Net's interactivity to seek out, identify, and work with relevant experts when covering stories outside their own level of expertise. Colbin notes one such effort - The Conversation - that pairs journalists with academic experts to the benefit of both.  Academics get more, and more accurate, reporting on research results, and journalists get access to the expertise to allow them to better evaluate the information they get from outside sources and to provide a fuller understanding of the background and context of breaking news.
  In addition, the Internet offers the potential to explore new business models and funding sources.  The news industry collectively bemoans the fact that online advertising revenues have not yet been sufficient to replace declines in traditional advertising revenues, and is unsure about the potential of subscription or direct-purchase revenues.  True, but beside the point - online news outlets have a cost advantage in that their distribution costs are minimal.  Besides, online advertising is the fastest growing sector in advertising/marketing spending, and online news outlets are increasingly considering and testing pay/subscription models.  There are already enough successful online pay/subscription models out there to suggest that they can be reliable revenue streams, if outlets can develop a mix of content, a market segment that recognizes its value, and a pricing scheme that works for that mix.  But the real opportunity of the Internet is that it enables so many other revenue sources and streams, including donations/subsidies, pre-pay models, crowdfunding, etc.  Earlier this year, the Pew Project for Excellence in Journalism looked at a range of alternative online funding sources.  Colbin discusses the background of the Avaaz Daily Briefing, funded through donations from more than 20,000 people.

  Certainly, the Internet, along with the explosion of digital technology and telecommunications, has radically reformed - and continues to reform - media and news markets, principally by dramatically increasing the options available to news consumers and increasing competition for traditional news outlets that, for the most part, hadn't faced intense competition in decades.  But the same forces increasing competition are creating new opportunities for creating and disseminating news and information, for traditional news outlets as well as for other individuals and organizations.  Admittedly, not all of these new and alternative outlets will be financially successful, or will produce accurate and trustworthy information.  But there's nothing that inherently prevents the production and distribution of journalism in its highest and proudest traditions.  In fact, the Internet makes doing so much more possible - it will be up to news organizations to take advantage of it and compete successfully in the news/journalism market.


Source -  Internet Creates Media Problems, Seeks to Solve Them, OnlineSPIN
The Search for a New Funding Model,  Pew Research Center's Project for Excellence in Journalism report

Supporting Videos
A New Way to Do Journalism - Andrew Jaspan discusses The Conversation at TEDxCanberra 2012

"End Times" Clip from The Daily Show on NYTimes and decline of print journalism (for fun)

Timeshifting's Ratings Impact

When I recently posted some comments on the cable/broadcast battle to claim supremacy, I noted that same-day timeshifted viewing had grown significantly.  The Hollywood Reporter also noted the change in its review of the start of the Fall 2012 season.  With DVRs (digital video recorders) in about 46% of US TV households, younger viewers in particular seem to be actively embracing the potential to adapt their TV viewing to their own lifestyles, rather than shifting their daily habits to watch TV.
  To back this up, the piece compared this season's ratings among young adults (18-49) combining live viewing with three days' worth of delayed viewing against the traditional ratings based on live viewing alone.  In aggregate, the 18-49 audience for the four networks' primetime premiere week (excluding sports and Saturday) was 41% larger when you included those who timeshifted and watched the program within three days.
  The article also provided some more specific comparisons for returning series - NBC's Revolution was up 50% with L+3 viewing, making it the most timeshifted program in NBC's history; CBS's Vegas was 28% higher with timeshifting; ABC's Modern Family added 33% to its rating when including delayed viewing.
  Now, that's just a few, so I went back to the TVB study to crank out some more numbers on the level of timeshifting we're seeing so far this Fall.  Those ratings are for the previous week and for the 35-54 demographic, and the report covered delayed or timeshifted viewing within one day of the original airing - thus the numbers aren't directly comparable to those above.  The report also covered viewing only for the top primetime programs.  Here are the programs and the share of their audience that came from one-day timeshifted viewing:
Modern Family 32%; Big Bang Theory 31%; Grey's Anatomy 28%; NCIS 25%; How I Met Your Mother 31%; Survivor: Philippines 29%; 2 Broke Girls 26%; New Girl SPL 40%; The Voice (Tues) 20%; Once Upon a Time 24%; Amazing Race 21 29%; Mike & Molly 25%; Revenge 25%; Two and a Half Men 20%; New Girl 29%; The Middle 22%; NCIS: Los Angeles 16%; The Office 33%; X-Factor (Wed) 19%
Those on the quantitative side will note that this is anecdotal and a limited, nonrandom sample, so don't take the precise numbers as gospel.  Still, they show a significant level of timeshifted viewing.
Another caveat comes from David Poltrack, chief research officer at CBS: timeshifting during premiere week may also be driven by people wanting to sample new programs.
From the network perspective, that's good news, "because sampling is what it's all about."
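  One more note on the numbers: the L+3 figures above are "lift" (delayed viewing as an increase over the live rating), while the TVB figures are "share" (delayed viewing as a fraction of the total audience) - another reason they aren't directly comparable.  A minimal illustration in Python, with made-up ratings:

def lift(live, delayed):
    """Delayed viewing as a percentage increase over the live rating."""
    return delayed / live

def timeshift_share(live, delayed):
    """Delayed viewing as a share of the total (live + delayed) audience."""
    return delayed / (live + delayed)

live_rating, delayed_rating = 3.0, 1.5   # hypothetical 18-49 ratings points
print(f"lift:  {lift(live_rating, delayed_rating):.0%}")             # 50%
print(f"share: {timeshift_share(live_rating, delayed_rating):.0%}")  # 33%

The same timeshifted audience that reads as a 50% lift over live amounts to only a 33% share of total viewing.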

  It seems that DVRs and timeshifting have caught on, at least with some demographic groups.  The levels are high enough to make a strong case for networks and advertisers to put more emphasis on ratings measures that include at least some delayed viewing.  They might also consider the long-term impact of delayed viewing on program scheduling strategies.

Source -  DVRs Dramatically Altering Fall TV Battleground, The Hollywood Reporter
Seasons of Premiers: Fall Broadcast and Summer Cable, TVB report

Edited to fix bad link for Hollywood Reporter source. (17Oct2012)