Reducing Internet Latency: New Research on Thin Data Streams
Source: EurekAlert!



"Up to now, Internet research has primarily focused on speeding up transmission by increasing bandwidth so that more data can be transferred at a given time," explains Andreas Petlund of Simula Research Laboratory in Oslo.

The most common Internet protocol for transmitting data, TCP, works by apportioning available bandwidth among the users present at any given time. The downside is that this can cause latency, or delay, in data transmissions. For time-dependent applications such as Internet telephony and online gaming, time lags as short as a few hundred milliseconds can create big problems.

Aiming to reduce latency

"In real-time gaming against other players online, data is transmitted only when an action such as moving around or shooting at someone is performed. The same principle applies for stock market programs when placing orders or requesting share prices, for example, via the trading systems in use by Oslo Børs, the Norwegian Stock Exchange. In such cases it is essential to avoid any delay," says Dr Petlund.

Applications like these often generate what are called thin data streams. With thin streams, only small amounts of data are transmitted at a time, and there can be extended periods between data packets.
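
To make this concrete, here is a minimal sketch of a thin-stream sender in Python (an illustration, not the researchers' code; the host, port and messages are invented): tiny payloads sent only when something happens, with Nagle's algorithm disabled so each small message leaves immediately.

    import socket
    import time

    # Hypothetical game server; address and messages are made up.
    HOST, PORT = "game.example.com", 4000

    sock = socket.create_connection((HOST, PORT))
    # Disable Nagle's algorithm so each tiny message becomes its own
    # packet immediately, instead of being buffered with later data.
    sock.setsockopt(socket.IPPROTO_TCP, socket.TCP_NODELAY, 1)

    # A thin stream: payloads of a few bytes, sent only when the
    # player acts, with long and irregular gaps in between.
    for action in [b"MOVE 3,4", b"SHOOT 17", b"MOVE 3,5"]:
        sock.sendall(action)
        time.sleep(0.5)  # hundreds of milliseconds between packets

    sock.close()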

According to Andreas Petlund, thin streams cannot compete with greedy traffic for bandwidth: they almost invariably come up short, and users are left to cope with the resulting lag.

As part of a new research project funded under the Research Council of Norway's large-scale programme on Core Competence and Value Creation in ICT (VERDIKT), researchers are working to reduce latency as much as possible.

"We want a more balanced Internet where thin streams don't always lose out. This can be achieved by adding speed to the mix, instead of only thinking about maximising throughput," says Dr Petlund.

New approaches

Network researchers are now planning to use simulation and modelling to learn more about the network behaviour of thin data streams. According to Dr Petlund, neither this nor the behaviour of data streams in competition with other traffic has ever been studied in depth.

The primary obstacle lies in the vast complexity of the systems making up the Internet. "We may thoroughly understand each individual mechanism or sub-protocol under controlled conditions, but in the Internet jungle it is rather like putting something into a black box without knowing what's going to come out the other end," he explains.

"This happens because the Internet is a shared resource and we have no control over what everyone else is using it for."

One of the partners the Norwegian researchers will be working with is Dr Jens Schmitt of the University of Kaiserslautern. Dr Schmitt is working on the development of mathematical models of network behaviour and testing the extent to which the models provide a good picture of reality.

"We also have some researchers from the US on the team," Dr Petlund adds. "In collaboration with the Cooperative Association for Internet Data Analysis (CAIDA) in San Diego, a leader in the field of Internet analysis, we are going to perform measurements and analyses to find out what percentage of all data streams are thin streams. No such data exists anywhere today."

Pushing for standardisation

Researchers are also employing more traditional research methods in order to study how thin streams behave both in test networks in the laboratory and when they are transmitted via the Internet.

One desired outcome is a standardised mechanism for handling thin data streams through the Internet Engineering Task Force (IETF).

"We won't be able to establish a standard unless we can prove that one is really needed. That is why we first need to measure the prevalence of thin streams," says Dr Petlund.

It is also essential to find out whether prioritising thin data streams on the Internet has any negative consequences for other traffic. If it does, the wide variety of transmission technologies in use today will pose a formidable challenge.

"At one time everyone connected to the Internet by means of a cable. Now we have a wide array of alternatives such as WiFi, 3G, 4G, WiMax, ADSL and fibre-optic connections -- all of which behave differently. We must come up with solutions that are optimal for everyone," Andreas Petlund affirms.

Better online computer games

It was an interest in computer games that originally inspired researchers at Simula to study systems supporting time-dependent applications, well before most others in the field.

Andreas Petlund has previously worked on improvements at the operating-system level to reduce the latency caused by packet loss. Linux users are already benefiting from the resulting technology.
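
On Linux, these modifications surface as per-socket TCP options, TCP_THIN_LINEAR_TIMEOUTS and TCP_THIN_DUPACK, available since kernel 2.6.34 (the latter has been removed again in recent kernels). A minimal sketch of enabling them follows; Python's socket module does not define these constants, so the numeric values are taken from the kernel's tcp.h.

    import socket

    # Option numbers from the Linux kernel's <linux/tcp.h>; the Python
    # socket module does not define symbolic names for them.
    TCP_THIN_LINEAR_TIMEOUTS = 16  # back off linearly, not exponentially
    TCP_THIN_DUPACK = 17           # fast retransmit on one duplicate ACK

    sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    # Both options only take effect while the kernel classifies the
    # connection as thin (fewer than four packets in flight).
    sock.setsockopt(socket.IPPROTO_TCP, TCP_THIN_LINEAR_TIMEOUTS, 1)
    sock.setsockopt(socket.IPPROTO_TCP, TCP_THIN_DUPACK, 1)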

The large Norwegian computer game company Funcom has integrated these improvements into a number of its game servers. The technology has been tested on its highest-profile game, Age of Conan, and will be used in The Secret World, which is soon to be released.

Facts about data packets and network latency

In order to transmit large amounts of data over the Internet as efficiently as possible, a sender transmits a steadily increasing amount of data until maximum bandwidth capacity is reached. The sending rate then stabilises so that bandwidth usage is optimised.
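
This ramp-up is TCP's congestion control. The toy model below is a rough sketch only (the loss rule and round-trip-time steps are simplifications introduced here), but it shows the characteristic pattern: rapid doubling, then linear growth, then backing off when capacity is exceeded.

    def simulate_cwnd(rounds, capacity, ssthresh=32):
        """Toy model of TCP window growth: exponential 'slow start'
        up to ssthresh, then linear growth, backing off on loss."""
        cwnd = 1  # congestion window, in packets
        for _ in range(rounds):
            if cwnd > capacity:       # network queue overflows: loss
                ssthresh = cwnd // 2
                cwnd = max(1, ssthresh)
            elif cwnd < ssthresh:
                cwnd *= 2             # slow start: double per round trip
            else:
                cwnd += 1             # congestion avoidance: +1 per round trip
            yield cwnd

    # The window climbs quickly, stabilises near capacity, and oscillates.
    print(list(simulate_cwnd(30, capacity=48)))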

The most widely used Internet transmission protocol today, TCP, works by dividing data into packets. These packets are transmitted through queuing systems: all data streams travelling between given nodes on the Internet share the same queues.

If a queue fills up, entire data packets are removed from it. These packets are then lost.
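
A rough sketch of this "tail drop" behaviour, with an arbitrary queue capacity chosen purely for illustration:

    from collections import deque

    QUEUE_CAPACITY = 4        # arbitrary, for illustration
    queue = deque()

    def enqueue(packet):
        """Tail drop: a packet arriving at a full queue is discarded."""
        if len(queue) >= QUEUE_CAPACITY:
            print("dropped:", packet)   # this packet is lost
            return False
        queue.append(packet)
        return True

    for p in ["pkt1", "pkt2", "pkt3", "pkt4", "pkt5"]:
        enqueue(p)                      # pkt5 is dropped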

In order to determine which packets have actually arrived at the destination, the sender requires a delivery confirmation (an acknowledgement) for each packet. If too much time elapses before a confirmation is received, the packet is retransmitted, resulting in network lag.
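
How long the sender waits is governed by TCP's retransmission timeout, computed from smoothed round-trip-time measurements as standardised in RFC 6298. The sketch below shows why even a fast connection waits at least a second before a timeout retransmission, a delay that hits thin streams particularly hard because they rarely have enough packets in flight to trigger the quicker fast-retransmit mechanism instead.

    def rto_estimator(rtt_samples, alpha=0.125, beta=0.25, min_rto=1.0):
        """Sketch of the RFC 6298 retransmission-timeout estimator:
        smoothed RTT plus four times its variation, floored at 1 s."""
        srtt = rttvar = None
        for rtt in rtt_samples:
            if srtt is None:
                srtt, rttvar = rtt, rtt / 2
            else:
                rttvar = (1 - beta) * rttvar + beta * abs(srtt - rtt)
                srtt = (1 - alpha) * srtt + alpha * rtt
            yield max(min_rto, srtt + 4 * rttvar)

    # Even with ~100 ms round trips, the timeout never drops below one
    # second, and each failed retransmission doubles it again.
    print(list(rto_estimator([0.10, 0.12, 0.09, 0.11])))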

