The amount of video data is growing rapidly across industries. A recent IDC report projects that 79.4 zettabytes of data will be created by connected Internet of Things (IoT) devices by 2025, mostly generated by video applications.
Harnessing the full potential of this content deluge is no small task. Manual video analysis is slow and labor-intensive, prone to errors, and often infeasible given limited resources.
Failure to fully optimize content or effectively monitor and respond to live video can result in missed opportunities for a variety of industries. Below is a sampling of how object detection in video and overall video analytics technology are impacting specific industries:
The eighth edition of the FIFA Women’s World Cup™ is well underway, with teams from 24 countries battling it out for the championship title. While millions of soccer fans stay tuned to the excitement in France, IBM is teaming up with FOX Sports to help transform production of the event by infusing AI analysis and streaming into its coverage of The Beautiful Game.
The pursuit of excellence is as captivating as it is timeless. That’s why millions of people will turn their eyes to Augusta, Georgia, from April 11 – 14, as a field of elite competitors vie for one of golf’s most prestigious accolades – the Green Jacket awarded to the Masters 2019 champion.
For more than 20 years, IBM and Augusta National Golf Club have worked together to invite patrons around the world onto the sport’s most hallowed ground through innovative digital experiences. This partnership is rooted in each organization’s shared desire to preserve and expand golf’s most unique experience through advanced technology. From developing the event’s first website and mobile application to creating AI-generated highlights and analysis, IBM has used industry-leading solutions to present the pristine timelessness that is the Masters Tournament to millions of viewers around the world.
When news breaks, television news crews do what they do best: hustle to the scene to get the word out quickly, accurately and often under daunting conditions.
Their work has enormous impact: Even in a new era of instant access to digital news on the Internet, television remains a go-to resource. The September 2017 State of the News survey by the Pew Research Center found more people get their news from television than any other source. What’s more, Pew found most of those TV news viewers get their news from their local TV stations and their companion websites.
Understanding the scope and social impact of TV news helps explain why it’s disappointing to news directors and station managers that coverage isn’t always accurate and available for a significant share of the audience – people who rely on written text, not spoken language, to know what’s happening. To highlight this, we cover how advancements in AI (artificial intelligence) automation are making accessible TV possible, even for live television content.
For more depth on the topic of using AI for captions, also download this white paper which goes over some of the solutions available from the Weather Company and IBM Watson Media: Captioning Goes Cognitive.
Curious about the artificial intelligence capabilities of the IBM Watson Media solutions for managing video content, but looking for a way to integrate them into existing workflows?
APIs are available to integrate IBM Watson Video Enrichment and IBM Watson Captioning into other applications, such as existing dashboards and interfaces. This includes both generating metadata with the AI and managing its training so it is better attuned to a given use case. In addition, the APIs are launching with new features, some currently unavailable elsewhere.
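To make the integration idea concrete, here is a minimal sketch of how a dashboard might assemble a metadata-generation request and fold the AI-generated tags back into its own records. The function names and JSON fields (`video_id`, `features`, `tags`) are illustrative assumptions, not the documented IBM Watson Media API.

```python
# Hypothetical sketch of wiring AI video enrichment into an existing
# dashboard. Field names are assumptions for illustration only.

def build_enrichment_request(video_id, features):
    """Assemble the JSON body a metadata-generation call might take."""
    return {
        "video_id": video_id,
        "features": sorted(features),  # e.g. keywords, scene labels
        "callback": None,              # results could also be polled
    }

def merge_metadata(existing, generated):
    """Fold AI-generated tags into a dashboard's own metadata record,
    preserving any manually curated tags already present."""
    merged = dict(existing)
    merged["tags"] = sorted(
        set(existing.get("tags", [])) | set(generated.get("tags", []))
    )
    return merged
```

In a real integration, the request body would be posted to the service and the merge step run on the asynchronous result; the split shown here keeps AI output from overwriting human curation.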
Looking for live broadcast closed captioning solutions?
IBM Watson Captioning offers a service for broadcast television networks to caption their live content, using a combination of artificial intelligence in the cloud and hardware on location. For the on-premises component, the Watson Live Captioning RS-160 is hardware created specifically for this use case by the Weather Company to complement the Captioning service. For accuracy, the AI can be trained in advance, expanding both vocabulary and relevant, hyper-localized context by providing a corpus.
This delivers a solution that is not only highly accurate, but also scalable and built for high availability.
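As a rough illustration of the corpus idea, the sketch below pulls out repeated terms from a sample corpus that a base vocabulary does not already cover, the kind of hyper-localized names a station would want recognized on air. The frequency-counting approach and the `BASE_VOCABULARY` set are simplifying assumptions; the service's actual training pipeline is not public.

```python
# Illustrative sketch only: mining a training corpus for hyper-localized
# vocabulary (place names, local terms) ahead of a live broadcast.
from collections import Counter

# Assumed stand-in for the recognizer's existing vocabulary.
BASE_VOCABULARY = {"the", "severe", "weather", "near", "schools", "closed"}

def candidate_terms(corpus_text, min_count=2):
    """Return words seen repeatedly in the corpus that the base
    vocabulary does not already cover."""
    words = [w.strip(".,!?").lower() for w in corpus_text.split()]
    counts = Counter(words)
    return sorted(w for w, c in counts.items()
                  if c >= min_count and w not in BASE_VOCABULARY)
```

A local news archive fed through a step like this would surface the county names and anchors' names that generic speech models most often miss.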
As fans around the world get ready to head to Russia for the 2018 FIFA World Cup this June, FOX Sports and IBM are launching a historic AI collaboration across multiple FOX Sports properties and programming — the first of its kind for the broadcaster.
Beginning with the 2018 FIFA World Cup, FOX Sports is tapping IBM Watson Media’s specialized AI video technology and IBM iX’s proven expertise in designing user experiences to streamline production workflows to quickly classify, edit and access match highlights in near real-time. The advancements to production and distribution will enable FOX Sports to curate engaging video clips and match highlights so that sports enthusiasts back home don’t miss a single play, penalty kick, or goal.
How does automated closed captioning work? What elements improve or impact the accuracy for artificial intelligence (AI) driven captioning?
This article examines why automating caption generation is important before diving into how speech recognition and other elements combine to provide an accurate experience. This includes many of the behind-the-scenes aspects of how AI approaches the task of transcribing audio. The article then concludes with a few tips to keep in mind when looking for a solution that automates closed captioning.
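One of those behind-the-scenes steps can be sketched in a few lines: after a speech recognizer emits timestamped words, they must be grouped into caption cues of readable length, with pauses used as natural break points. The character and gap limits below are illustrative defaults, not a captioning standard.

```python
# Minimal sketch: grouping a recognizer's timestamped words into cues.
# Limits (max_chars, max_gap) are illustrative assumptions.

def words_to_cues(words, max_chars=32, max_gap=1.0):
    """words: list of (text, start_sec, end_sec) tuples.
    Starts a new cue when a line grows too long or a pause occurs."""
    cues, current = [], None
    for text, start, end in words:
        if (current is None
                or len(current["text"]) + 1 + len(text) > max_chars
                or start - current["end"] > max_gap):
            current = {"text": text, "start": start, "end": end}
            cues.append(current)
        else:
            current["text"] += " " + text
            current["end"] = end
    return cues
```

Production systems layer much more on top of this (speaker changes, punctuation, line balancing), but the segmentation step itself is part of what separates raw transcription from usable captions.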
Looking for a way to speed up the generation of accurate captions? Interested in AI vocabulary training?
Earlier, IBM introduced Watson Captioning to generate captions for videos using speech to text. These captions could then be edited for accuracy or to adhere to personal preferences. Those capabilities are now being expanded: Watson can learn from those edits or be taught directly. As a result, accurate caption generation becomes faster by eliminating previously repeated tasks.
Note that this feature for Watson to learn from edits or to be manually taught is currently available only in the standalone Watson Captioning solution. It is not yet available for Streaming Manager or Streaming Manager for Enterprise, although it will be coming to those in the future.
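The "learn from edits" idea can be sketched simply: record a correction an editor makes once, then apply it to future transcripts so the same fix is never repeated. The phrase-substitution table below is only an illustration under that assumption; Watson's actual training mechanism is internal and more sophisticated.

```python
# Hedged sketch of learning from caption edits via remembered
# corrections. Not the actual Watson training mechanism.

class CorrectionMemory:
    def __init__(self):
        self.rules = {}  # misrecognized phrase -> corrected phrase

    def learn(self, original, corrected):
        """Remember one human edit so it need not be repeated."""
        self.rules[original] = corrected

    def apply(self, transcript):
        """Apply every remembered correction to a new transcript."""
        for wrong, right in self.rules.items():
            transcript = transcript.replace(wrong, right)
        return transcript
```

Even this toy version shows the payoff: a proper noun corrected once in January no longer needs correcting in February's captions.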
With the news moving at lightning speed, consumers are more tuned into current events than ever while media companies are challenged to keep pace. Broadcast networks are under intense pressure to respond quickly to breaking news, world events, and sporting events in order to satisfy consumer demand for instant, quality digital experiences.
However, delivering accurate captions for live broadcast is both time and resource intensive for broadcast networks, given that production teams must manually transcribe live programming in real-time – which often leads to delayed or incorrect captions. To solve these challenges, IBM launched Watson Captioning – a flexible, scalable solution that leverages AI to automate the captioning process and uses machine learning to improve accuracy over time. As outlined in this white paper, Captioning Goes Cognitive: A New Approach to an Old Challenge, Watson is bringing greater context to video assets while removing some of the challenge associated with closed captioning.
Through its Live Captioning functionality, Watson Captioning automates closed captions for broadcast networks, unlocking value from live video content and optimizing the viewer experience. By accurately captioning live video content, broadcasters can provide premium experiences for local viewers, increase accessibility for the hearing-impaired community, and adhere to compliance standards.