JMF registry error: could not commit

A typical symptom is the error "Error: Can't open video card 1" (a java.lang exception), with JMFRegistry showing no video capture devices at all. As the CaptureDeviceManager documentation puts it, commit must be called to save changes made to the device list by calling addDevice or removeDevice, and it throws java.io.IOException if the registry could not be committed. The decoder is pure Java and can be used on any JMF-enabled platform; at this stage the encoder cannot be released because of the legal issues involved.
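For reference, here is a minimal, self-contained sketch of that CaptureDeviceManager pattern; the device name, locator and format below are illustrative placeholders rather than a real capture card:

import java.io.IOException;
import javax.media.CaptureDeviceInfo;
import javax.media.CaptureDeviceManager;
import javax.media.Format;
import javax.media.MediaLocator;
import javax.media.format.VideoFormat;

public class RegisterDevice {
    public static void main(String[] args) {
        // Illustrative device entry only; real entries normally come from the JMF detection tools.
        CaptureDeviceInfo info = new CaptureDeviceInfo(
                "Example video card",
                new MediaLocator("vfw://0"),
                new Format[] { new VideoFormat(VideoFormat.RGB) });
        CaptureDeviceManager.addDevice(info);
        try {
            // Persist the device list; an IOException here means the registry could not be committed.
            CaptureDeviceManager.commit();
        } catch (IOException e) {
            System.err.println("Could not commit JMF registry: " + e.getMessage());
        }
    }
}

A common cause of the commit failure is simply that the JMF registry file (jmf.properties) is not writable by the user running the JVM.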


Downloading and Using NXT

Platform-independent binary and source distributions of NXT can be downloaded from Sourceforge at http://sourceforge.net/projects/nite/. For most purposes the binary download is appropriate; the source download is distinguished by a suffix after the version number. For the most up-to-date version of NXT, the SourceForge CVS repository is available. For example,

cvs -z3 -d:pserver:anonymous@nite.cvs.sourceforge.net:/cvsroot/nite co nxt

would get you a current snapshot of the entire NXT development tree.

Before using NXT, make sure you have a recent version of Java installed on your machine: Java 1.4.2_04 is the minimum requirement and Java 1.5 is recommended. Learn about Java on your platform, and download the appropriate version using Sun's Java Pages.

For optimum media performance you may also want to download JMF and the platform-specific performance pack for your OS. NXT comes packaged with a platform-independent version of JMF. Users of MacOS should instead use the FMJ libraries, which use QuickTime for media playback, giving better performance and easier installation. NXT comes packaged with a version of FMJ compiled specifically with QuickTime support.

  • Step 1: Download and unzip the NXT zip file (nxt_<version>.zip).

  • Step 2: Some data and simple example programs are provided to give a feel for NXT. On Windows, try double-clicking one of the supplied batch files; on Mac, try running one of the .command files; on Linux (or Mac), try running one of the shell scripts from a terminal. More details are in the Sample Corpora section below.

  • Step 3: Try some sample media files: Download signals.zip (94 Mb) and unzip it into the Data directory in your NXT directory. Now when you try the programs they should run with synced media.

Some example NXT data and simple example programs are provided with the NXT download. Several corpora are provided, each with only one observation, even though in some cases the full corpus actually consists of several hundred observations. Each corpus is described by a metadata file, with the data itself in a separate directory. The Java example programs reside in their own directory and are provided as simple examples of the kind of thing you may want to do using the library.

  • single-sentence - a very small example corpus marked up for part of speech, syntax, gesture and prosody. Start the appropriate script for your platform; start the Generic Corpus Display and rearrange the windows. Even though there is no signal for this corpus, clicking the play button on the NITE Clock will time-highlight the words as time goes by. Try popping up the search window using the Search menu and typing a query (the suggested example searches for left-handed gestures that temporally overlap words). You should see the three results highlighted when you click them.

  • dagmar - a slightly larger example corpus: a single monologue marked up for syntax and gesture. We provide a sample gesture-type coding interface which shows synchronisation with video (please download signals.zip to see this in action).

  • smartkom - a corpus of human-computer dialogues. We provide several example stylesheet displays for this corpus showing the display object library and synchronisation with signal (again, please download signals.zip above to see the synchronisation).

  • switchboard - a corpus of telephone dialogues. We provide coders for animacy and markables which are in real-world use.

  • maptask-standoff - This is the full multi-rooted tree version of the Map Task corpus. We provide one example program that saves a new version of the corpus with the part-of-speech values as attributes on the (timed unit) tags, moving them from an attribute of the tags that dominate those timed units.

  • monitor - an eye-tracking version of the Map Task corpus.

  • ICSI - a corpus of meetings. We provide coders for topic segmentation, extractive summarization etc. The entire meeting corpus consists of more than 75 hours of meeting data richly annotated both manually and automatically.

All of the launcher scripts in the NXT download have to set the Java classpath before running an NXT program. To compile and run your own NXT programs you need to do the same thing. The classpath normally includes all of the JAR files in the lib directory, plus that directory itself. Many programs only use a small proportion of those JAR files, but it's as well to include them all. JMF is a special case: you should find NXT plays media if the classpath contains jmf.jar. However, this will be sub-optimal: on Windows, JMF is often included with the Java installation, so you will need no jmf.jar on your classpath at all; on other platforms, see the section on playing media signals below.
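For example, a launch sequence along the following lines should work (shown for a Unix shell; on Windows use ; as the path separator — the program name and metadata path are placeholders, not files shipped with NXT):

CP=lib
for jar in lib/*.jar; do CP="$CP:$jar"; done
javac -classpath "$CP" MyNXTProgram.java
java -classpath "$CP:." MyNXTProgram -c Data/meta/my-metadata.xml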

How to Play Media Signals in NXT

NXT plays media using JMF (the Java Media Framework). JMF's support for media formats is limited and it depends on the platform you are using. A list of JMF supported formats is at http://java.sun.com/products/java-media/jmf/2.1.1/formats.html. This list is for JMF 2.1.1, which NXT currently ships with.

There are several ways of improving the coverage of JMF on your platform:

  • Performance packs from Sun - these improve codec coverage for Windows and Linux, and are available from the JMF download page. In particular, note that MPEG format isn't supported in the cross-platform version of JMF, but it is in the performance packs.

  • Fobs4JMF for Windows / Linux / MacOSX is a very useful package providing Java wrappers for the ffmpeg libraries (C libraries used by many media players which have wide coverage of codecs and formats). Downloads and further information are available from the Fobs4JMF site. Make sure you follow the full installation instructions, which involve updating the JMFRegistry and amending your classpath.

  • MP3 - There's an MP3 plugin available for all platforms from Sun.

Note

Direct playback from DVDs or CDs is not supported by JMF.

NXT comes with a cross-platform distribution of JMF in the lib/JMF directory, and the scripts that launch the GUI samples have this copy of JMF on the classpath. On a Windows machine, it is better to install JMF centrally on the machine and change the script to refer to this installation. This will often get rid of error messages and exceptions (although they don't always affect performance), and allows JMF to find more codecs.

It is a good idea to produce a sample signal and test it in NXT (and any other tools you intend to use) before starting recording proper, since changing the format of a signal can be confusing and time-consuming. There are two tests that are useful. The first is whether you can view the signal at all under any application on your machine, and the second is whether you can view the signal from NXT. The simplest way of testing the latter is to name the signal as required for one of the sample data sets in the NXT download and try the generic display or some other tool that uses the signal. For video, if the former works and not the latter, then you may have the video codec you need, but NXT can't find it - it may be possible to fix the problem by adding the video codec to the JMF Registry. If neither works, the first thing to look at is whether or not you have the video codec you need installed on your machine. Another common problem is that the video is actually OK, but the header written by the video processing tool (if you performed a conversion) isn't what JMF expects. This suggests trying to convert in a different way, although some brave souls have been known to modify the header in a text editor.
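If you want to test the second condition outside NXT, a small check along these lines (a sketch only; pass the URL of the signal you are testing) will at least tell you whether the JMF installation on your classpath can find a handler for the file, which is essentially what NXT needs:

import javax.media.Manager;
import javax.media.MediaLocator;
import javax.media.NoPlayerException;
import javax.media.Player;

public class CheckSignal {
    public static void main(String[] args) throws Exception {
        // e.g. java CheckSignal file:///path/to/signal.avi (path is illustrative)
        MediaLocator loc = new MediaLocator(args[0]);
        try {
            Player p = Manager.createPlayer(loc); // throws NoPlayerException if no handler/codec is found
            System.out.println("JMF found a handler for " + loc);
            p.close();
        } catch (NoPlayerException e) {
            System.out.println("No JMF handler for " + loc);
        }
    }
}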

Media on the Mac

NXT ships with some startup scripts for the Mac platform (these are the .command files) that attempt to use FMJ to pass control of media playing from JMF to the native codecs used by the Quicktime player.

If the FMJ approach fails, you should still be able to play media on your Mac but you'll need to edit your startup script. Take an existing command file as a template and change the classpath. It should contain lib/JMF/lib (so jmf.properties is picked up), lib/JMF/lib/jmf.jar and lib/fmj/lib/jffmpeg-1.1.0.jar, but none of the other FMJ files. This approach uses JFFMPEG more directly and works on some Mac platforms where the default FMJ approach fails. It may become the default position for NXT in future.
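A sketch of the relevant classpath fragment for such a script (the NXT jars and the main class are left exactly as in the original .command file):

#!/bin/sh
# JFFMPEG-based playback: the jmf.properties directory, jmf.jar and the jffmpeg jar,
# but none of the other FMJ jars.
CLASSPATH="lib/JMF/lib:lib/JMF/lib/jmf.jar:lib/fmj/lib/jffmpeg-1.1.0.jar:$CLASSPATH"
export CLASSPATH
# ...launch the usual NXT main class here, as in the original script...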


Programmatic Controls for NXT

This section describes how to control certain behaviours of NXT from the command line.

These switches can be set using Java properties. Environment variables with the same names and values are also read, though properties will override environment variables. Example:

java -DNXT_DEBUG=0 -DNXT_QUERY_REWRITE=true CountQueryResults -c mymeta.xml -o IS1003d -q '($s summ)($w w):text($w)="project" && $s^$w'

This runs the program with query rewriting on and in silent mode (i.e. no messages). Setting environment variables with the same names will no longer work.

Java Arguments Controlling NXT Behaviour

  • NXT_DEBUG — The expected value is a number between 0 and 4: 0 gives no messages; 1 errors only; 2 important messages; 3 warnings; 4 debug information. Two further arguments are also accepted to simply turn messages on or off.

  • NXT_QUERY_REWRITE — Values accepted: true or false. If the value is true, NXT will automatically rewrite queries in an attempt to speed up execution.

  • Lazy-loading switch — Values accepted: true or false. If the value is false, lazy loading will not be used; that means data will be loaded en masse rather than as required, which can cause memory problems when too much data is loaded.

  • Resource-prompting switch — Values accepted: true or false. If the value is true, the user will be asked for input at all points where there is more than one resource listed in the resource file for a coding that needs to be loaded. The user will be asked even if there are already preferred / forced / defaulted resources for the coding. This should only be used by people who really understand the use of resources in NXT.

  • Forced-resource list — A list of strings separated by commas (no spaces). Each string is taken to be the name of a resource in the resources file for the corpus and is passed to forceResourceLoad so that it must be loaded. Messages will appear if the resource names do not appear in the resource file.

  • Annotator-codings list — A list of strings separated by semi-colons. If any of the strings are coding names in the metadata file, they are used when populating the list of existing annotators for the 'choose annotator' dialog. If no valid coding names are listed, all available annotators are listed.

Compiling from Source and Running the Test Suites

  • Go into the top-level directory, decide on a build file to use and copy it to the right place. Run the ant build to compile; the target that skips cleaning all compiled classes and rebuilding the javadoc every time is perhaps the most useful one. If there are compile errors, copy the error message into an email and send it to Jonathan or another developer (see the SourceForge members page for email addresses).

  • Run the test suite(s). The NXT test suite is by no means comprehensive but tests a subset of NXT functionality. To run it, you need to have the JUnit jar on your CLASSPATH. Then compile the test class:

    javac -d . test-suites/nom-test-suite/NXTTestScratch.java

    Now run the tests:

    java junit.textui.TestRunner NXTTestScratch

    Again, any errors should be forwarded to a developer.

  • If you are making a real public release, update the version file in the top-level directory, choosing a new minor or major release number. Commit this to CVS.

  • Now build the release using the ant file (use the default target). This compiles everything, makes a zip file of the source and one of the compiled version for release, and produces the Javadoc. If you're on an Edinburgh machine, copy the Javadoc to the usual location. Test the shell script examples, and upload the new release to SourceForge.

Example Java code using com.sun.media.util.Registry.commit():

public JMFInit(String[] args, boolean visible) {
    super("Initializing JMF...");
    this.visible = visible;
    // Allow applets to capture media and save files, then persist the settings.
    Registry.set("secure.allowCaptureFromApplets", true);
    Registry.set("secure.allowSaveFileFromApplets", true);
    updateTemp(args);
    try {
        Registry.commit();
    } catch (Exception e) {
        message("Failed to commit to JMFRegistry!");
    }
    // Note: run(), not start(), so detection happens synchronously on this thread.
    Thread detectThread = new Thread(this);
    detectThread.run();
}

private void updateTemp(String[] args) {
    if (args != null && args.length > 0) {
        tempDir = args[0];
        message("Setting cache directory to " + tempDir);
        try {
            // Point the JMF cache directory at the supplied path and persist it.
            Registry.set("secure.cacheDir", tempDir);
            Registry.commit();
            message("Updated registry");
        } catch (Exception e) {
            message("Couldn't update registry!");
        }
    }
}

// Variant using an explicit Registry instance rather than the static methods.
private void updateTemp(String[] args) {
    if (args != null && args.length > 0) {
        tempDir = args[0];
        message("Setting cache directory to " + tempDir);
        Registry r = new Registry();
        try {
            r.set("secure.cacheDir", tempDir);
            r.commit();
            message("Updated registry");
        } catch (Exception e) {
            message("Couldn't update registry!");
        }
    }
}

US9414116B2 - Media extension apparatus and methods for use in an information network

This application is a divisional of and claims priority to co-owned U.S. patent application Ser. No. 10/782,680 of the same title filed Feb. 18, 2004, and issued as U.S. Pat. No. 8,078,669 on Dec. 13, 2011, which is incorporated herein by reference in its entirety.

A portion of the disclosure of this patent document contains material that is subject to copyright protection. The copyright owner has no objection to the facsimile reproduction by anyone of the patent document or the patent disclosure, as it appears in the Patent and Trademark Office patent files or records, but otherwise reserves all copyright rights whatsoever.

1. Field of Invention

The present invention relates generally to the field of software applications used on an information network (such as a cable television network), and specifically to the accessibility and control of on-demand and related services at certain electronic devices such as, e.g., set-top boxes used in the network during operation of the software.

2. Description of Related Technology

Software applications are well known in the prior art. Such applications may run on literally any type of electronic device, and may be distributed across two or more locations or devices connected by a network. Often, a so-called “client/server” architecture is employed, where one or more portions of applications disposed on client or consumer premises devices (e.g., PCs, PDAs, digital set-top boxes {DSTBs}, hand-held computers, etc.) are operatively coupled and in communication with other (server) portions of the application. Such is the case in the typical hybrid fiber coax (HFC) or satellite content network, wherein consumer premises equipment or CPE (e.g., DSTBs or satellite receivers) utilize the aforementioned “client” portions of applications to communicate with their parent server portions in order to provide downstream and upstream communications and data/content transfer.

Digital TV (DTV) is an emerging technology which utilizes digitized and compressed data formats (e.g., MPEG) for content transmission, as compared to earlier analog “uncompressed” approaches (e.g., NTSC). The DTV content may be distributed across any number of different types of bearer media or networks with sufficient bandwidth, including HFC, satellite, wireless, or terrestrial. DTV standards such as the OpenCable Application Platform middleware specification (e.g., Version 1.0, and incipient Version 2.0) require that applications be downloaded to CPE from the bearer or broadcast network in real-time. As is well known, the OCAP specification is a middleware software layer specification intended to enable the developers of interactive television services and applications to design such products so that they will run successfully on any cable television system in North America, independent of set-top or television receiver hardware or operating system software choices. OCAP enables manufacturers and retail distributors of set-tops, television receivers or other devices to build and sell devices to consumers that support all services delivered by cable operators.

Multimedia Home Platform (MHP) defines a generic interface between interactive digital applications and the terminals on which those applications execute. This interface decouples different providers' applications from the specific hardware and software details of different MHP terminal implementations. It enables digital content providers to address all types of terminals ranging from low-end to high-end set top boxes, integrated digital TV sets and multimedia PCs. The MHP extends the existing DVB open standards for broadcast and interactive services in all transmission networks including satellite, cable, terrestrial and microwave systems.

Multimedia Home Platform (MHP) Specification 1.0.X contains detailed information on the enhanced broadcasting and interactive profiles, as well as various MHP content formats including PNG, JPEG, MPEG-2 Video/Audio, subtitles and resident and downloadable fonts. MHP 1.0 further provides mandatory transport protocols including DSM-CC object carousel (broadcast) and IP (return channel), DVB-J application model and signaling, hooks for HTML content formats (DVB-HTML application model and signaling), a graphics reference model, and Annexes with DSM-CC object carousel profile, text presentation, minimum platform capabilities, and various APIs. The MHP 1.0 specification provides a set of features and functions required for the enhanced broadcasting and interactive broadcasting profiles. The enhanced broadcasting profile is intended for broadcast (one way) services, while the interactive broadcasting profile supports in addition interactive services and allows MHPs to use the Internet. New profiles will be added later based on the continuing work of the DVB project.

Multimedia Home Platform (MHP) Specification 1.1.X contains further detailing of Interactive and Internet Access Profiles, stored application support, application download via broadcast or interaction channels, DVB-J extensions to better support international applications and smart cards, specification of DVB-HTML, greater support for plug-ins, and support for bi-directional referencing between MHP content and Internet content. MHP 1.1 builds on the MHP 1.0 specification in order to better support the use of the interaction channel and to specify elements which promote interoperability with Internet content.

The Advanced Common Application Platform (ACAP) is a recently developed specification which aims to ensure interoperability between ACAP applications and different implementations of platforms supporting ACAP applications. The architecture and facilities of the ACAP Standard are intended to apply to broadcast systems and receivers for terrestrial (over-the-air) broadcast and cable TV systems. In addition, the same architecture and facilities may be applied to other transport systems (such as satellite).

ACAP is primarily based on GEM and DASE, and includes additional functionality from OCAP. GEM provides a framework for the definition of a GEM Terminal Specification. The ACAP specification builds on GEM by adding specification elements in order to offer a higher degree of interoperability among different environments based on digital TV specifications from ATSC and SCTE.

An ACAP Application is a collection of information which is processed by an application environment in order to interact with an end-user or otherwise alter the state of the application environment. ACAP Applications are classified into two categories depending upon whether the initial application content processed is of a procedural or a declarative nature. These categories of applications are referred to as procedural (ACAP-J) and declarative (ACAP-X) applications, respectively. An example of an ACAP-J application is a Java TV™ Xlet composed of compiled Java™ byte code in conjunction with other multimedia content such as graphics, video, and audio. An example of an ACAP-X application is a multimedia document composed of XHTML markup, style rules, scripts, and embedded graphics, video, and audio.

Application environments are similarly classified into two categories depending upon whether they process procedural or declarative applications. These categories are referred to as ACAP-J and ACAP-X environments, respectively. An example of an ACAP-J environment is a Java Virtual Machine (JVM) and its associated Application Programming Interface (API) implementation. An example of an ACAP-X environment is an XHTML multimedia document browser, also known as a user agent.
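To make the procedural category concrete, the skeleton below shows the Java TV Xlet life-cycle that an ACAP-J application implements (the class name is arbitrary and the method bodies are left empty):

import javax.tv.xlet.Xlet;
import javax.tv.xlet.XletContext;
import javax.tv.xlet.XletStateChangeException;

public class ExampleXlet implements Xlet {
    private XletContext context;

    // Called once; the Xlet should not yet acquire scarce resources.
    public void initXlet(XletContext ctx) throws XletStateChangeException {
        this.context = ctx;
    }

    // Called when the application manager moves the Xlet to the Active state.
    public void startXlet() throws XletStateChangeException {
    }

    // Called to release resources while remaining loaded.
    public void pauseXlet() {
    }

    // Called when the Xlet is being terminated.
    public void destroyXlet(boolean unconditional) throws XletStateChangeException {
    }
}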

In the OCAP, MHP, and ACAP standards, several protocols are defined for accessing broadcast media and files. These generally specify use of the Sun Microsystems Java Media Framework APIs (hereinafter “JMF”). The JMF enables audio, video and other time-based media to be added to applications and applets built on Java technology. This optional package, which can capture, playback, stream, and transcode multiple media formats, extends the Java 2 Platform, Standard Edition (J2SE) for multimedia developers and provides a toolkit to develop scalable, cross-platform technology. The JMF includes a set of components, including the JMF Player API.

With the JMF Player API, programmers can implement support for many audio or video formats by building upon an established media playback framework. In addition, standard implementations provide built-in support for common formats such as muLaw, Apple AIFF, and Microsoft PC WAV for audio, as well as Apple QuickTime video, Microsoft AVI video, and Motion Picture Expert Group's MPEG formats for video. Multimedia playback can also be readily integrated into applets and applications alike with only a limited amount of code.

JMF also allows use of native methods for greater speed, and hence more optimized performance on each platform. At the same time, the common Java Media Player API ensures that applets and standalone applications will run on any Java platform.
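By way of illustration, a minimal JMF Player sketch looks roughly like this (the media URL is a placeholder and error handling is omitted):

import javax.media.ControllerEvent;
import javax.media.ControllerListener;
import javax.media.Manager;
import javax.media.MediaLocator;
import javax.media.Player;
import javax.media.RealizeCompleteEvent;

public class SimplePlayback {
    public static void main(String[] args) throws Exception {
        // Placeholder clip; any format the installed handlers support will do.
        final Player player = Manager.createPlayer(new MediaLocator("file:///tmp/clip.wav"));
        player.addControllerListener(new ControllerListener() {
            public void controllerUpdate(ControllerEvent event) {
                // Start playback once the player has finished realizing.
                if (event instanceof RealizeCompleteEvent) {
                    player.start();
                }
            }
        });
        player.realize(); // asynchronous; completion is reported to the listener
    }
}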

A variety of different approaches to implementing media handling and management within networked systems (including use of JMF) are disclosed in the prior art. For example, U.S. Pat. No. 6,092,107 to Eleftheriadis, et al. issued Jul. 18, 2000 and entitled “System and method for interfacing MPEG-coded audiovisual objects permitting adaptive control” discloses a system and method allowing the adaptation of a non-adaptive system for playing/browsing coded audiovisual objects, such as the parametric system of MPEG-4. The system of the invention (programmatic system) incorporates adaptive behavior on top of the parametric system. The parametric system of MPEG-4 consists of a Systems Demultiplex (Demux) overseen by digital media integration framework (DMIF), scene graph and media decoders, buffers, compositor and renderer. The Java Virtual Machine and Java Media Framework (JVM and JMF) are used to implement various of the aforementioned components. The invention includes a specification of an interfacing method in the form of an application programming interface (API). Hot object, directional, trick mode, transparency and other interfaces are also specified.

U.S. Pat. No. 6,181,713 to Patki, et al. issued Jan. 30, 2001 and entitled “Selectable depacketizer architecture” discloses a scheme that permits the use of a selectable depacketization module to depacketize data streams. An RTP session manager (RTPSM) is responsible for receiving RTP packets from a network and parsing/processing them. A specific depacketizer module is located at runtime depending on the coding decoding scheme (“codec”) used to compress the incoming data stream. A naming convention is followed in order for a specific depacketizer to be located. The depacketizer receives data that has already been parsed and is in a readable form. The depacketizer outputs this data using an interface designed such that it is generic across a number of codecs. The interface passes all relevant information to the decoder where the actual depacketized data stream will be decompressed. The RTPSM need not know of any codec details since the depacketizer handles all codec specific issues. A default format is described for data that is output by a depacketizer. This data is provided to a handler that is aware of this format. Pluggable depacketizer naming and searching conventions are designed according to JMF's player factory architecture, and use the same rules for integrating depacketizers into the RTPSM.

U.S. Pat. No. 6,216,152 to Wong, et al. issued Apr. 10, 2001 and entitled “Method and apparatus for providing plug in media decoders” discloses a method and apparatus for providing plug-in media decoders. Embodiments provide a “plug-in” decoder architecture that allows software decoders to be transparently downloaded, along with media data. User applications are able to support new media types as long as the corresponding plug-in decoder is available with the media data. Persistent storage requirements are decreased because the downloaded decoder is transient, existing in application memory for the duration of execution of the user application. The architecture also supports use of plug-in decoders already installed in the user computer. One embodiment is implemented with object-based class files executed in a virtual machine to form a media application. A media data type is determined from incoming media data, and used to generate a class name for a corresponding codec (coder-decoder) object. A class path vector is searched, including the source location of the incoming media data, to determine the location of the codec class file for the given class name. When the desired codec class file is located, the virtual machine's class loader loads the class file for integration into the media application. If the codec class file is located across the network at the source location of the media data, the class loader downloads the codec class file from the network. Once the class file is loaded into the virtual machine, an instance of the codec class is created within the media application to decode/decompress the media data as appropriate for the media data type.

U.S. Pat. No. 6,631,350 to Celi, Jr., et al. issued Oct. 7, 2003 and entitled “Device-independent speech audio system for linking a speech driven application to specific audio input and output devices” discloses a device-independent speech audio system for linking a speech driven application to specific audio input and output devices, which can include a media framework for transporting digitized speech audio between speech driven applications and a plurality of audio input and output devices. The media framework can include selectable device-dependent parameters which can enable the transportation of the digitized speech to and from the plurality of audio input and output devices. The device-independent speech audio system also can include an audio abstractor configurable to provide specific ones of the selectable device-dependent parameters according to the specific audio input and output devices. Hence, the audio abstractor can provide a device-independent interface to the speech driven application for linking the speech driven application to the specific audio input and output devices.

U.S. Pat. No. 6,631,403 to Deutsch, et al. issued Oct. 7, 2003 and entitled “Architecture and application programming interfaces for Java-enabled MPEG-4 (MPEG-J) systems” discloses an MPEG-J collection of Java application programming interfaces (APIs) with which applications can be developed to interact with the platform and the content. In the context of MPEG-J, the platform is a device like a set-top box or a PC with Java packages conforming to a well-defined Java platform. The Java-based application consists of Java byte code, which may be available from a local source, like a hard disk, or it may be loaded from a remote site over a network. The MPEG-J Java byte code may be available as a separate elementary stream. The MPEG-4 system is the “Presentation engine” of MPEG-J. MPEG-J provides programmatic control through an “Application engine” which enhances the MPEG-4 browser by providing added interactive capability.

U.S. Pat. No. 6,654,722 to Aldous, et al. issued Nov. 25, 2003 and entitled “Voice over IP protocol based speech system” discloses a VoIP-enabled speech server including a JMF interface and speech application which can be configured to communicate with a VoIP telephony gateway server over a VoIP communications path. In operation, the speech application can receive VoIP-compliant packets from the VoIP telephony gateway server over the VoIP communications path. Subsequently, digitized audio data can be reconstructed from the VoIP-compliant packets, and the digitized audio data can be speech-to-text converted. Additionally, text can be synthesized into digitized audio data and the digitized audio data can be encapsulated in VoIP-compliant packets which can be transmitted over the VoIP communications path to the telephony gateway server. The JMF media interface is used to establish a data path for transporting the digital audio data between the speech application and the voice call connection.

United States Patent Application Publication 20020073244 to Davies, et al. published Jun. 13, 2002 entitled “Method and an apparatus for the integration of IP devices into a HAVi network” discloses a method and apparatus for integrating IP devices into a HAVi network. An Internet Protocol (IP) and HAVi compliant device acts as a controller in the HAVi network and communicates with at least one HAVi compliant device using HAVi application programming interfaces (APIs). A server on the controller communicates with at least one IP device having a proxy and an IP and HAVi API. The server includes at least one IP device control module (IP device DCM) corresponding to the IP device; the IP device DCM provides API support to translate and relay calls between the proxy and the server so that the at least one HAVi compliant device can communicate with the IP device. In one embodiment, JMF and C++ graphic libraries are used in conjunction with a streaming module to get the stream data and display the stream data.

United States Patent Application Publication 20030037331 to Lee published Feb. 20, 2003 and entitled “System and Method for Highly Scalable Video on Demand” discloses a system and method for providing video on demand including pre-scheduled multicasts of videos as well as dynamically initiated transmissions of the front portion of videos. Users may first receive a dynamically initiated front portion of a video and then be merged into a pre-scheduled multicast. The dynamically initiated transmission is also a multicast. Multiple admission controllers and a single server coordinate the dynamically initiated transmissions for any one video. Preferably, interactive controls are supported without requiring extra server-side resources, and latency is automatically equalized between users admitted via the pre-scheduled and the dynamically initiated transmissions. A user receiving a video via a pre-scheduled multicast does not need to change channels to finish receiving the video transmitted. Client applications implemented using the Java programming language and the Java Media Framework (JMF) are also disclosed.

In the aforementioned OCAP, MHP and ACAP standards, several protocols are defined for accessing broadcast media and files. These protocols are indicated in string form, and are encapsulated in the standards using a Locator object that contains the protocol and any other terms necessary to identify a service and its elements. In each standard, the protocols must be supported by JMF. The JMF MediaHandler understands the content format of the media associated with a protocol string, and the JMF DataSource understands the actual messaging and packet protocol associated with a protocol string.

OCAP, for example, allows an application to extend the given protocols in an application-specific fashion. This is performed by calling the JMF PackageManager “set-prefix” methods. Setting the prefixes to provide an extended protocol is defined by JMF, which allows changes made by the “set” methods to be made persistent by providing commit-prefix methods.

However, neither OCAP nor the other prior art approaches described above allow an application to call these commit-prefix methods and make them persistent. This means when more than one application needs to add the same protocol, each application must perform a redundant set prefix process, or communicate with an application that has set the prefixes using Inter-Xlet communications (IXC), as defined by MHP 1.0.2 and complied with in OCAP 1.0.
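For reference, the set/commit sequence that JMF itself defines looks roughly like the sketch below; the vendor package prefix is hypothetical, and the commitProtocolPrefixList() step is exactly the persistence operation that, as noted above, applications cannot ordinarily invoke under these standards:

import java.util.Vector;
import javax.media.PackageManager;

public class ProtocolPrefixSetup {
    public static void main(String[] args) {
        // Current list of package prefixes searched when resolving protocol locators.
        Vector prefixes = PackageManager.getProtocolPrefixList();
        String vendorPrefix = "com.example.ondemand"; // hypothetical prefix
        if (!prefixes.contains(vendorPrefix)) {
            prefixes.addElement(vendorPrefix);
            PackageManager.setProtocolPrefixList(prefixes);
            // Without this call the new prefix lasts only for the lifetime of the VM.
            PackageManager.commitProtocolPrefixList();
        }
    }
}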

Accordingly, there is a need for improved apparatus and methods for providing network-specific services in a standards compliant fashion. Such improved apparatus and methods would ideally utilize existing media handling infrastructure (e.g., the JMF APIs or comparable) to enable a network-specific protocol for handling various services within the device, such as video on-demand (VOD). Such improved apparatus and methods would also ideally permit an MSO or other entity to add the network-specific protocols to the device such that comparable services (e.g., VOD) could operate on CPE within a number of heterogeneous networks.

The present invention addresses the foregoing needs by disclosing an improved on-demand apparatus and associated methods.

In a first aspect of the invention, an improved method of operating client equipment in a content-based network is disclosed. The method generally comprises: receiving at the client equipment an application configured to implement a network-specific protocol; storing the application within a storage device of the client equipment; running the application to configure the equipment according to the network-specific protocol; and operating the CPE and the application to provide on-demand services to a user. In one exemplary embodiment, the client equipment comprises CPE within an HFC cable network compliant with the OCAP, ACAP, and/or MHP standards and running Java middleware. The downloaded application is configured to define one or more protocol-specific locators within the CPE which provide persistent access to the various media interfaces (e.g., JMF) by one or more on-demand applications resident on the CPE.

In a second aspect of the invention, an improved method of operating client equipment adaptable for use in any one of a plurality of different content-based networks within a particular content-based network is disclosed. The method generally comprises: receiving at the equipment an application configured with a protocol extension, the protocol extension being adapted for use in the particular network within which the client equipment operates; running the application to configure the equipment according to the protocol; and selectively allowing at least one application resident on the equipment to access the extension, the at least one application having attributes specific to the particular network. In the exemplary embodiment, the CPE comprises an OCAP, ACAP, and/or MHP compliant DSTB which can operate in any number of different MSO networks. The present invention permits this “universal” CPE to be configured by a particular MSO or other entity when the CPE is used in their network, such configuration including installation of network- or MSO-specific protocols which support on-demand services.

In a third aspect of the invention, an improved method of developing the specific protocol useful for delivery of content from a first node of a network to a second node thereof is disclosed. The method generally comprises: developing a first component adapted to communicate between the first and second nodes; developing a second component adapted to process the content delivered to the second node; and developing a third component adapted to cooperate with at least one of the first and second components to control functions specific to the protocol. In one exemplary embodiment, the first component comprises a Java DataSource, the second a Java MediaHandler/Player, and the third a control adapted to control functionality associated with an on-demand application (e.g., play, rewind, pause, etc. for a VOD application). The OD application accesses these components via a network-specific protocol.

Similarly, in a fourth aspect of the invention, improved CPE adapted for operation within a content-based network, the CPE comprising a software application adapted for providing on-demand services to a user using a network-specific protocol, is disclosed. The application generally comprises: a first software component adapted to communicate between the CPE and another entity of the network; a second software component adapted to process the content delivered to the CPE; and a third software component adapted to cooperate with at least one of the first and second components to control functions specific to the protocol.

In a fifth aspect of the invention, an improved method of implementing a network-specific on-demand application within the CPE of the network is disclosed. The method generally comprises: developing a plurality of media interface components adapted to implement a network-specific protocol; disposing the plurality of components within a software application to produce a configured application; running the configured application on the CPE; and defining at least one path to the media interface components, the at least one path and media interface components cooperating to provide network specific on-demand services.

In a sixth aspect of the invention, an improved apparatus adapted for operation within a multi-channel HFC cable distribution network is disclosed. In one exemplary embodiment, the apparatus comprises a DSTB (or TV with integrated DSTB hardware) having: a digital processor; a mass storage device operatively coupled to the processor; OCAP-compliant middleware adapted to run on the processor; and at least one software application adapted to run on the processor, the at least one application having a plurality of developed components within its application directory hierarchy; wherein the DSTB is further configured to run the application and configure at least one path to at least one of the developed components. This path is then utilized by one or more on-demand applications to access and provide services.

In a seventh aspect of the invention, a method of utilizing CPE compatible for use on a variety of different cable networks within any given one of the networks is disclosed. The method generally comprises: disposing the CPE within the given one network to be in operative communication with another network entity; transferring a software application onto the device from the network entity, the software application being configured to implement a network-specific protocol, the network-specific protocol implementing one or more network-specific on-demand services; and running the at least one software application on the device, the running configuring at least one path within the CPE to permit access of the network-specific on-demand services by a user.

In an eighth aspect of the invention, an improved head-end apparatus adapted for providing a network-specific on-demand application to CPE of the network is disclosed. The apparatus generally comprises: at least one computer in communication with the network, and at least one computer program adapted to develop a specific protocol useful in implementing the on-demand application according to the method comprising: developing a first component adapted to communicate between the head-end and the CPE; developing a second component adapted to process the content delivered to the CPE; and developing a third component adapted to cooperate with at least one of the first and second components to control functions specific to the on-demand application.

In a further aspect of the disclosure, a method of operating client equipment in operative communication with a content distribution network is disclosed. In one embodiment, the method includes: (i) receiving at the client equipment a first application configured to implement a network-specific protocol, the first application including one or more media-based interfaces; (ii) storing the first application within a storage device of the client equipment; (iii) causing the client equipment and the first application to configure at least one path to the one or more media-based interfaces within the client equipment; (iv) the at least one path allowing an on-demand application configured according to a client equipment-specific protocol resident on the client equipment to communicate with a head-end entity via the one or more media-based interfaces according to the network-specific protocol using the client equipment-specific protocol; and (v) operating the client equipment and the first application to provide on-demand services to a user. In one variant, the on-demand application is enabled to make use of the one or more media-based interfaces via a signed certificate of permission from a network entity.

In a further aspect of the disclosure, client equipment is disclosed. The client equipment may include middleware in operative communication with a content distribution network. In one embodiment, the client equipment includes: a storage device; and a processor. The processor may be implemented to run at least one computer program thereon, the at least one computer program including a plurality of instructions.

In one variant, the plurality of instructions may be configured to, when executed by the processor: (i) receive a software packet configured to enable a plurality of applications resident on the client equipment to provide network services according to a network specific protocol; (ii) store the first application within the storage device; (iii) run the software packet to configure the client equipment according to the network-specific protocol; and (iv) selectively allow the plurality of applications resident on the client equipment to access and utilize the software packet via one or more application programming interfaces (APIs), the selective allowance based upon a determined permission of a respective trusted monitor application of the plurality of applications to access and utilize the software packet.

In yet a further aspect of the disclosure, a method of operating consumer premises equipment (CPE) adaptable for use in any one of a plurality of different content distribution networks within a particular content distribution network is disclosed. The CPE includes an application configured with an extension to a protocol, the extension being adapted for use in the particular content distribution network. In one embodiment, the method includes: (i) running the application to configure the CPE according to the protocol, the application including a plurality of components, the plurality of components having prefixes; (ii) selectively allowing at least one second application resident on the CPE to access the extension, the at least one second application configured to utilize device-specific protocols; and (iii) enabling services provided by the particular content distribution network based on the selective allowance, the enablement allowing the at least one second application the services according to first network-specific protocols using the device-specific protocols. The at least one second application may utilize a Java virtual machine (JVM) and the prefixes to access the extension.

These and other aspects of the invention shall become apparent when considered in light of the disclosure provided below.

Reference is now made to the drawings wherein like numerals refer to like parts throughout.

As used herein, the term “application” refers generally to a unit of executable software that implements theme-based functionality. The themes of applications vary broadly across any number of disciplines and functions (such as e-commerce transactions, brokerage transactions, mortgage interest calculation, home entertainment, calculator, etc.), and one application may have more than one theme. The unit of executable software generally runs in a predetermined environment; for example, the unit could comprise a downloadable Java Xlet™ that runs within the JavaTV™ environment.

As used herein, the term “computer program” is meant to include any sequence of human or machine cognizable steps which perform a function. Such program may be rendered in virtually any programming language or environment including, for example, C/C++, Fortran, COBOL, PASCAL, assembly language, markup languages (e.g., HTML, SGML, XML, VoXML), and the like, as well as object-oriented environments such as the Common Object Request Broker Architecture (CORBA), Java™ (including J2ME, Java Beans, etc.) and the like.

As used herein, the term “middleware” refers to software that generally runs primarily at an intermediate layer in a software or protocol stack. For example, middleware may run on top of an operating system and platform hardware, and below applications.

The term “component” refers generally to a unit or portion of executable software that is based on a related set of functionalities. For example, a component could be a single class in Java™ or C++. Similarly, the term “module” refers generally to a loosely coupled yet functionally related set of components.

As used herein, the term “process” refers to executable software that runs within its own CPU environment. This means that the process is scheduled to run based on a time schedule or system event. It will have its own Process Control Block (PCB) that describes it. The PCB will include items such as the call stack location, code location, scheduling priority, etc. The terms “task” and “process” are typically interchangeable with regard to computer programs.

A server process is an executable software process that serves various resources and information to other processes (clients) that request them. The server may send resources to a client unsolicited if the client has previously registered for them, or as the application author dictates.

As used herein, the term “DTV Network Provider” refers to a cable, satellite, or terrestrial network provider having infrastructure required to deliver services including programming and data over those mediums.

As used herein, the terms “network” and “bearer network” refer generally to any type of telecommunications or data network including, without limitation, hybrid fiber coax (HFC) networks, satellite networks, telco networks, and data networks (including MANs, WANs, LANs, WLANs, internets, and intranets). Such networks or portions thereof may utilize any one or more different topologies (e.g., ring, bus, star, loop, etc.), transmission media (e.g., wired/RF cable, RF wireless, millimeter wave, optical, etc.) and/or communications or networking protocols (e.g., SONET, DOCSIS, IEEE Std. 802.3, ATM, X.25, Frame Relay, 3GPP, 3GPP2, WAP, SIP, UDP, FTP, RTP/RTCP, H.323, etc.).

As used herein, the term “head-end” refers generally to a networked system controlled by an operator (e.g., an MSO or multiple systems operator) that distributes programming to MSO clientele using client devices. Such programming may include literally any information source/receiver including, inter alia, free-to-air TV channels, pay TV channels, interactive TV, and the Internet. DSTBs may literally take on any configuration, and can be retail devices, meaning that consumers may or may not obtain their DSTBs from the MSO exclusively. Accordingly, it is anticipated that MSO networks may have client devices from multiple vendors, and these client devices will have widely varying hardware capabilities. Multiple regional head-ends may be in the same or different cities.

As used herein, the terms “client device” and “end user device” include, but are not limited to, personal computers (PCs) and minicomputers, whether desktop, laptop, or otherwise, set-top boxes such as the Motorola DCT2XXX/5XXX and Scientific Atlanta Explorer 2XXX/3XXX/4XXX/8XXX series digital devices, personal digital assistants (PDAs) such as the Apple Newton®, “Palm®” family of devices, handheld computers such as the Hitachi “VisionPlate” and Dell Axim X3/X5, personal communicators such as the Motorola Accompli devices, Motorola EVR-8401, J2ME equipped devices, cellular telephones, or literally any other device capable of interchanging data with a network.

Similarly, the terms “Consumer Premises Equipment (CPE)” and “host device” refer to any type of electronic equipment located within a consumer's or user's premises and connected to a network. The term “host device” refers generally to a terminal device that has access to digital television content via a satellite, cable, or terrestrial network. The host device functionality may be integrated into a digital television (DTV) set. The term “consumer premises equipment” (CPE) includes electronic equipment such as set-top boxes, televisions, Digital Video Recorders (DVR), gateway storage devices (Furnace), and ITV Personal Computers.

As used herein, the term “network agent” refers to any network entity (whether software, firmware, and/or hardware based) adapted to perform one or more specific purposes. For example, a network agent may comprise a computer program running on a server belonging to a network operator, which is in communication with one or more processes on a CPE or other device.

As used herein, the term “DOCSIS” refers to any of the existing or planned variants of the Data Over Cable Services Interface Specification, including for example DOCSIS versions 1.0, 1.1 and 2.0. DOCSIS (version 1.0) is a standard and protocol for internet access using a “digital” cable network. DOCSIS 1.1 is interoperable with DOCSIS 1.0, and has data rate and latency guarantees (VoIP), as well as improved security compared to DOCSIS 1.0. DOCSIS 2.0 is interoperable with 1.0 and 1.1, yet provides a wider upstream band (6.4 MHz), as well as new modulation formats including TDMA and CDMA. It also provides symmetric services (30 Mbps upstream).

The term “processor” is meant to include any integrated circuit or other electronic device (or collection of devices) capable of performing an operation on at least one instruction including, without limitation, reduced instruction set core (RISC) processors, CISC microprocessors, microcontroller units (MCUs), CISC-based central processing units (CPUs), and digital signal processors (DSPs). The hardware of such devices may be integrated onto a single substrate (e.g., silicon “die”), or distributed among two or more substrates. Furthermore, various functional aspects of the processor may be implemented solely as software or firmware associated with the processor.

As used herein, the term “on-demand” refers to any service or condition invoked or initiated, either directly or indirectly, by a user, customer, individual, or entity (or their proxy), and includes without limitation VOD (video on demand), near-VOD or NVOD (i.e., where a request incurs a delay at the server or other entity prior to commencement of service, including so called “staggered multicast”), MOD (movies on-demand), NPVR (network personal video recorder), and COD (commerce on-demand).

As used herein, the term “user interface” or UI refers to any human-system interface adapted to permit one- or multi-way interactivity between one or more users and the system. User interfaces include, without limitation, graphical UI, speech or audio UI, tactile UI, and even virtual UI (e.g., virtual reality).


Example Java code using com.sun.media.util.Registry.set():

Registry.set("allowLogging", true); Registry.set( "secure.logDir", new File(scHomeDir, "log").getPath()); Registry.set( "adaptive_jitter_buffer_" + suffix, cfg.getString(prop));
privatevoid updateTemp(String[] args) { if (args != null && args.length > 0) { tempDir = args[0]; message("Setting cache directory to " + tempDir); Registry r = new Registry(); try { r.set("secure.cacheDir", tempDir); r.commit(); message("Updated registry"); } catch (Exception e) { message("Couldn't update registry!"); } } }
privatevoid updateTemp(String[] args) { if (args != null && args.length > 0) { tempDir = args[0]; message("Setting cache directory to " + tempDir); Registry r = new Registry(); try { r.set("secure.cacheDir", tempDir); r.commit(); message("Updated registry"); } catch (Exception e) { message("Couldn't update registry!"); } } }
