1. Number of study hours: 15

2. Short description of the course: The course describes the principles, the international standards and the main technologies in the field of wireless and ubiquitous computing. In particular, it discusses: a. the concept of the digital convergence between information and communication technologies that is the basis for wireless and ubiquitous computing; b. the characteristics of the typical multimedia and mobile components; c. the principles of wireless communication; d. the major standards and protocols for wireless networks (from personal to global networks).

3. Target groups: The project objectives directly address the promotion of high knowledge and skills standards in the IT area and, in particular, provide an innovative approach to training. The first target group consists of IT students (vocational school IT basic-level training and the first courses of colleges and universities) in the technology area, and of IT practitioners not yet holding vocational certificates.

4. Prerequisites: The course assumes a basic knowledge of computer networks. In particular, it requires the learner to have knowledge of the basic architectures, standards and protocols for wired networks, as well as of the Internet and the Web and related main technologies. To this aim, learners of this course are required to have attended the previous EUCIP courses in the C (Operate) area from C.1 to C.4.

5. Aim of the course - learning outcomes: At the end of this module, the learner will be able to: a. Understand the concept of the digital convergence between information and communication technologies; b. Distinguish between media types; c. Recognise the main I/O devices; d. Understand the basic characteristics of IP telephony; e. Recognise the typical electronic devices for mobility; f. Understand the concept of modulation and know the main modulation techniques for wireless networks; g. Describe the characteristics of the three main technologies for wireless communication (infrared, radio waves, microwaves), and their typical application domains; h. Know a taxonomy of wireless networks (from personal to satellite networks); i. List the main international standards for wireless networks; j. Describe the architecture of wireless local networks and of satellite networks; k. Describe main limitations and compatibility issues of wireless networks; l. Know the characteristics of the main protocols for mobile stations (Bluetooth, Mobile IP, WAP).


C.5.1 Digital Convergence

C.5.1.1. Technologies that communicate information to human senses

Modern multimedia applications enable complex audio/video interaction between the user and the system. Besides traditional peripherals, such as monitors, keyboards and mice, modern computer systems include additional types of input and output peripherals for video and audio signals.

As for input peripherals, webcams are used for instance in videoconference applications, along with peripherals for sound acquisition (such as microphones). Video acquisition boards have almost disappeared, replaced by high-speed connections (e.g., Firewire), due to the increasing deployment of digital photo and video cameras. Finally, the explosion of broadband Internet connections has fostered IP-based telephony, which can be accessed through traditional headphones and microphones connected to the PC, or directly through an IP digital phone.

As for output peripherals, video peripherals include, as a basic example, the monitor. Traditional cathode ray tube (CRT) monitors are being increasingly replaced by TFT (Thin Film Transistor) monitors (or flat screen monitors), which provide a more stable and uniform image. In multimedia applications, however, traditional monitors retain some advantages, since TFT monitors (at least the cheaper models) suffer from image persistence (images leave a shadow on the screen after being displayed), which may be annoying when playing video content. Moreover, in TFT monitors the level of brightness depends on the viewing angle. For audio content, advanced systems are equipped with dedicated processors and allow reproduction through more than one channel, often supporting high-fidelity sound diffusion.

C.5.1.2. Digital convergence

Digital convergence is the term for the trend toward storing and using all kinds of information in digital formats. Music’s transition to MP3 is one example. As more types of information become digital, the tools used to manage them converge toward the computer and the Internet. Digital convergence also refers to the pervasive nature of digital technologies. A noticeable and familiar example is the so-called home theater system, i.e., a system conceived for reproducing, at home, multimedia content with a level of quality close to that of cinemas. An important component is the decoder which, in addition to processing sound data, also acts as the hub connected to all the other devices (speakers, subwoofer, etc.). “Surround” systems have at least four speakers and two different channels, used for delivering audio data to front and back speakers, respectively.

C.5.1.3. Media types and standard formats

A multimedia file is made of a set of elementary images and/or audio samples processed at subsequent instants of time. In order to make such samples intelligible and to let the original signal be reconstructed, both the way the samples are stored and how they relate to each other need to be specified. These information items are usually stored along with the samples themselves and constitute the so-called file format. Since multimedia files made of a sequence of samples normally require large amounts of memory, video and audio data are often compressed before being stored. This function is accomplished by codecs (COmpressor-DECompressors), typically implemented as software modules and used to compress the incoming multimedia stream from an external source, then process and play it.

Audio file formats can be categorized as sampled audio formats and synthetic audio formats. A sampled audio file is obtained by instantaneously converting the physical sound into a series of digital values. The standard format for such files is the WAVE format; WAVE file names use the .wav extension. The quality of the sound is influenced by:
- The time interval between two consecutive samples (i.e., the sampling rate);
- Mono or stereo recording (in the latter case, the sound played by the right speakers differs from the sound played by the left speakers);
- The number of bits used for each sample (e.g., audio CDs use 16 bits to store each sample).

A synthetic audio file does not use samples, but describes the audio content by means of commands such as “play the C note for 2 milliseconds on a guitar”. The standard format for such files is MIDI (Musical Instrument Digital Interface); MIDI file names use the .mid or .midi extension. The quality of a MIDI file is far lower than that of a WAVE file, but WAVE files are much larger than MIDI files.
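The effect of these parameters on file size can be sketched with a small calculation (a hypothetical helper, written for illustration):

```python
def wav_size_bytes(sample_rate_hz, bits_per_sample, channels, seconds):
    """Uncompressed PCM audio size: rate x sample depth x channels x duration."""
    return sample_rate_hz * (bits_per_sample // 8) * channels * seconds

# One minute of CD-quality stereo audio (44100 Hz, 16 bits, 2 channels):
print(wav_size_bytes(44100, 16, 2, 60))   # 10584000 bytes, roughly 10 MB
```

This is why raw sampled audio is rarely distributed online, and why compressed formats were introduced.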

Compressed formats
In order to distribute audio content through the Internet, it is necessary to use file formats which guarantee a degree of quality similar to sampled audio files, but require much less memory. To that end, compressed file formats have been introduced. There are basically two compression strategies:
- lossless compression: the original information is totally preserved, so every single bit of the initial file can be reconstructed;
- lossy compression: information that is not intelligible to the human ear, or is hardly perceived, is discarded.

There are two classes of lossy codecs:
- Constant Bit Rate (CBR): each time unit is assigned the same number of bits;
- Variable Bit Rate (VBR): segments of sound that are more complex than others are assigned a larger number of bits.
When compressing a file, codecs allow the user to set the desired bit rate.

Compressed audio formats
Mp3 (Mpeg-1 Audio Layer 3) is the most common file format; it is based on lossy compression with a selectable bit rate in the range 32-320 kbit/s, using CBR or VBR coding. It is not a new compression technology, but it ensures broader compatibility with players than newer formats. Aac (Advanced Audio Coding) is defined in Part 3 of the Mpeg-4 standard. It ensures a high quality level: at a 128 kbit/s bit rate it is close to the quality of a CD. It is supported by several encoders, with different rendering results. The most common extensions are M4a (Mpeg-4 Audio) and mp4. Mp3pro is an evolution of Mp3, which however has not been widely adopted. It requires half the space of the Mp3 format while ensuring the same quality, and can be played by a normal Mp3 player, albeit at lower quality. Ogg Vorbis is an open and patent-free format (unlike the Mp3 format). The basic principle is similar to Mp3, but largely improved, ensuring a good quality level even at low bit rates. Wma (Windows Media Audio) is Microsoft's alternative to the Mp3 format. It uses the .wma or .asf extensions. While ensuring a better quality than Mp3, it is inferior to Aac and Ogg.
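The gain from compression follows directly from the bit rate: for a CBR codec, file size is just bit rate times duration (again a hypothetical helper for illustration):

```python
def compressed_size_bytes(bitrate_kbps, seconds):
    """CBR compressed audio size: bit rate (kbit/s) x duration, in bytes."""
    return bitrate_kbps * 1000 * seconds // 8

# A 4-minute track at a common 128 kbit/s Mp3 setting:
print(compressed_size_bytes(128, 240))   # 3840000 bytes, about 3.8 MB
```

The same track as uncompressed CD-quality audio would take roughly 42 MB, a compression ratio of about 11:1.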

Formats and standards for images: classification
A basic classification of image formats distinguishes two large families: raster and vector images. In raster formats, images are described as a matrix of dots, called pixels (picture elements). The number of bits associated with each pixel defines the maximum number of different colors the pixel can take on. A raster image can be enlarged by replicating each pixel, which results in the so-called pixelation effect. A vector image, on the other hand, is described by means of a set of mathematical objects (lines, curves, etc.), so that its quality is preserved regardless of how the image is scaled. Unfortunately, not all images can be described with vector formats. Vector images are usually smaller than raster images and are well suited to geometrical images (for example, images processed by Computer Aided Design, CAD, software).

Figure 1: The effect of scaling a raster image
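The pixelation effect can be reproduced in a few lines of code; this is a minimal nearest-neighbour sketch, not a production scaling algorithm:

```python
def enlarge(raster, factor):
    """Enlarge a raster image by replicating each pixel `factor` times
    horizontally and vertically -- the source of visible pixelation."""
    out = []
    for row in raster:
        wide = [px for px in row for _ in range(factor)]  # widen the row
        out.extend(list(wide) for _ in range(factor))     # repeat it vertically
    return out

img = [[0, 1],
       [1, 0]]
print(enlarge(img, 2))
# [[0, 0, 1, 1], [0, 0, 1, 1], [1, 1, 0, 0], [1, 1, 0, 0]]
```

A vector image, by contrast, would simply be re-rendered from its mathematical description at the new size, with no quality loss.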

Formats and standards for images: the main formats
The main formats for raster images include BMP (BitMaP), GIF (Graphics Interchange Format), JPEG (Joint Photographic Experts Group) and TIFF (Tag Image File Format). For vector images, the most commonly adopted formats are EPS (Encapsulated PostScript), CDR (CorelDRAW) and WMF (Windows Metafile). BMP images represent the subject as a matrix of dots, each assigned a number of bits typically ranging from 24 to 32. As a consequence, BMP images ensure a high level of quality, but require a large amount of memory. The GIF format uses a lossless compression algorithm. It supports transparency, interlacing and animations. An interlaced image, in particular, can be represented by drawing first all even rows and then the odd ones. This enables displaying a preview of the image in around half the time required for the full image. GIF images support 256 colors, so they are not suitable for photographs but only for small images (such as icons) or simple animations. JPEG images ensure a smaller file size but use a lossy compression algorithm. The TIFF format is often used for image acquisition (e.g., from a scanner) and supports different compression algorithms and color ranges (i.e., the ways colors are represented in a given format).
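The even-rows-then-odd-rows preview idea can be sketched as follows (a simplification: the actual GIF format uses a four-pass row schedule built on the same principle):

```python
def preview_row_order(n_rows):
    """Order in which image rows are stored/transmitted so that a coarse
    preview (every other row) appears after about half the data arrives."""
    evens = list(range(0, n_rows, 2))
    odds = list(range(1, n_rows, 2))
    return evens + odds

print(preview_row_order(8))   # [0, 2, 4, 6, 1, 3, 5, 7]
```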

Video formats and standards
The main video formats and standards include:
- Audio Video Interleaved, AVI (.avi): a format introduced by Microsoft for Windows, and supported by many codecs (e.g., DivX, M-Jpeg). It is suited for editing work, due to its compatibility with almost any editing software.
- Apple QuickTime (.mov): the video format by Apple. Suitable codecs are also available for Windows-Intel platforms.
- Motion Picture Expert Group, MPEG: an audio/video compression standard, comprising several variants. MPEG-1 is used for the Internet; file names use the .mpg extension. An MPEG-1 movie can be stored on two CDs. MPEG-2 improves video quality, but requires more space: a movie requires a DVD (in fact, it is the video format used for DVDs and for digital television). MPEG-4 compresses data so that they fit on a CD. Currently, video DVDs and digital television use MPEG-2, which offers a good video quality level and a reasonable degree of compression. A one-hour movie (which would require 12 GB in DV-AVI, the format used in digital videocameras) takes 2 GB in MPEG-2.
- Windows Media Video, WMV (.wmv, .wma): an attempt by Microsoft to replace previous formats. It is based on the MPEG-4 codec.

The MPEG-2 format stores the video stream in blocks known as GOPs (Groups Of Pictures). Each GOP lasts around half a second and is made of 15 frames. The standard defines three types of frames: I, B, and P. Each group starts with an I-frame (intra frame), which contains a complete image. P-frames (predictive frames) are derived from the differences with respect to a preceding reference frame. Finally, B-frames (bidirectional predictive frames) are derived from both the previous and the next frame, and are processed with the maximum degree of compression.
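The GOP structure can be illustrated with a short sketch (the exact frame pattern is encoder-dependent; the repeating B-B-P arrangement below is just a common choice):

```python
def gop_pattern(n_frames=15):
    """Build a 15-frame MPEG-2-style Group of Pictures: one I-frame
    followed by a repeating B-B-P structure."""
    frames = ["I"]
    while len(frames) < n_frames:
        frames += ["B", "B", "P"]
    return "".join(frames[:n_frames])

print(gop_pattern())   # IBBPBBPBBPBBPBB
```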

C.5.1.4. IP telephony and the requirements of VoIP

VoIP (Voice over Internet Protocol) refers to protocols and technologies allowing voice traffic to be transmitted over the Internet or other similar packet-switched networks. It is also known as IP telephony or Internet telephony. The basic idea behind Voice over IP is to convert sound signals into digital data and encapsulate such information as data packets to be sent over IP. Normally, the digital audio data are compressed to reduce their data rate, addressing obvious bandwidth limitations. Using a single network to carry voice and data has the big advantage of enabling cost savings, especially when some underutilized network capacity can be used to carry VoIP traffic at no additional cost. Since they only require the use of the underlying Internet infrastructure, calls between two VoIP clients are often free. Calls involving switched telephone networks, such as the PSTN, are instead normally charged as ordinary calls by telephony operators. Numbering in VoIP infrastructures can be of either the Direct Inward Dialing (DID) or the access-number type. In the former case, the caller is directly connected to the VoIP user. Access numbers, on the other hand, require the caller to manually input the extension number associated with the VoIP user. Being based on a digital infrastructure, VoIP enables services and functions that are not straightforward in traditional PSTN telephony systems. Examples include:
- Usual Internet services, such as instant messaging, email and file exchange, can be integrated with the telephony service in a seamless way.
- VoIP ensures location independence, i.e. the user can connect to the telephony network from anywhere, provided that an Internet connection is available.
- A single Internet connection can carry multiple calls, provided that sufficient bandwidth is available.
- VoIP software can easily support conference calls with multiple participants, automatic redial, caller ID notification, and many other “value added services”.
- Cryptographic algorithms can be used for securing calls.
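A back-of-the-envelope view of the bandwidth cost of a VoIP call: each voice packet carries protocol headers in addition to the compressed audio, so the rate on the wire exceeds the codec rate. The figures below (a 40-byte RTP/UDP/IPv4 header and a 20 ms packetization interval) are assumptions chosen for illustration:

```python
def voip_bandwidth_kbps(codec_kbps, frame_ms, overhead_bytes=40):
    """Per-call IP bandwidth: codec payload rate plus the header
    overhead sent with every packet."""
    packets_per_second = 1000 / frame_ms
    overhead_kbps = packets_per_second * overhead_bytes * 8 / 1000
    return codec_kbps + overhead_kbps

# A 64 kbit/s codec sending one packet every 20 ms:
print(voip_bandwidth_kbps(64, 20))   # 80.0 kbit/s on the wire
```

Smaller packets reduce latency but increase the relative header overhead, a typical VoIP trade-off.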

C.5.2 Multimedia and Mobile Computing Components

C.5.2.1. Electronic devices for mobility

Consumer electronics is pushing digital technology into everyday life. Digital devices are becoming pervasive. They take on disparate forms and offer various kinds of functionality. Mobile phones, smartphones, Personal Digital Assistants (PDAs), audio/video players, and Global Positioning System (GPS) navigation systems are some of the most noticeable examples of this explosion. Although there is no industry-standard definition of a smartphone, the term usually refers to mobile phones offering complex functionality, close to that of standard personal computers and far beyond that of traditional mobile phones. Typically, smartphones run operating systems whose characteristics are similar to those of PC operating systems. In particular, one can normally install new applications, such as mobile games, word processors, email clients, instant messaging software, etc., developed not only by the handset manufacturer but by third-party software developers as well. Advanced smartphones normally provide a small alphabetical keyboard, a touch screen, a built-in camera, and sometimes GPS hardware and software for navigation applications; applications to process documents in a variety of formats, such as PDF and Microsoft Office; media software for playing music and other digital content, browsing photos and viewing video clips; Internet browsers; etc.

Slightly more complex than smartphones, Personal Digital Assistants are true handheld computers, sometimes referred to as palmtop computers. PDAs run a regular operating system, which provides fully-fledged capabilities, such as application installation and management, Internet access, e-mail, video recording, word processing, media playing, games, and GPS. Often, PDAs also have GSM functionality, acting as regular smartphones. They typically feature color screens, touch-screen technologies, and audio capabilities. Many PDAs support Wi-Fi or Wireless Wide-Area Networks (W-WANs) for wireless Internet access.
The Global Positioning System (GPS) is the best-known and most widely deployed Global Navigation Satellite System (GNSS). Other systems include the Russian GLONASS, which however is not completely functional at present, the upcoming European Galileo positioning system, a navigation system proposed by China, called COMPASS, and the IRNSS system proposed by India. GPS, whose official name is NAVSTAR GPS, was developed by the United States for military purposes but was opened to civilian use around twenty years ago. It is based on a constellation of 24 Medium Earth Orbit satellites, transmitting microwave signals from different positions and directions, which allow a GPS receiver to process the satellite signals and use them to determine its location with respect to the satellites, its speed, and its direction.

The expression wearable computing refers to computers that can be worn on the body. In a broad sense, wearable computing covers all unconventional computers that can be used in symbiosis with the user, such as health monitoring systems, but also entertainment or multimedia processing devices; in particular, it suits applications that require digital processing but need the user's hands, eyes, etc. to be left free to interact with the physical environment. Not surprisingly, wearable computers are also used for military applications. A typical feature of wearable computers is that they are not turned on or off by the user, but work continuously.

C.5.2.2. Main multimedia I/O devices

The keyboard is made of a set of keys, each corresponding to a letter or a symbol. By pressing a key, the device sends the computer system a numerical value corresponding to the key pressed. Many modern keyboards use wireless connections (i.e., they are cordless), which requires an autonomous power supply and hence the use of batteries. The so-called QWERTY keyboards are the most widespread type; they are named after the letters of the first six keys in the upper alphabetical row. Different keyboards correspond to different languages (e.g., the Italian keyboard of the Macintosh has a QZERTY configuration).

Mice are pointing devices, i.e., devices which translate the user's physical movements into movements of the pointer on the screen. To detect the user's movements, first-generation mice adopted an electro-mechanical solution: by rolling on a flat surface, a rubber-coated metal ball interacted with internal sensors, which translated the movement into an electrical signal sent to the system. In order to work properly, this solution required clean surfaces with enough friction. During the last decade, electro-mechanical solutions were replaced by optical ones: the bottom surface of the mouse houses a red LED and a light sensor. The LED emits a light beam which is reflected by the desk surface and read by the sensor, thereby detecting the direction of the movement. To improve performance, recent devices use laser light instead of red LEDs. Like keyboards, mice are often provided with radio links, eliminating the need for wires.

A monitor is a device which receives signals and displays them as images or text. Images on a monitor are formed by composing so-called pixels (Picture Elements). Each pixel has three associated color channels (Red Green Blue, the RGB model) whose composition makes up the point displayed. Typical parameters of a monitor are size, resolution and refresh frequency.
The size is expressed by measuring the diagonal in inches. The resolution is the number of pixels forming an image; it is given as the number of pixels in a horizontal line times the number of pixels in a vertical line (e.g., 1024x768). The higher the resolution, the more detailed the image displayed. The refresh frequency is the number of image updates per second, measured in Hertz; the human eye may perceive flicker at low refresh frequencies. TFT (Thin Film Transistor) technology is today used to produce flat-screen monitors, digital camera displays, and laptop screens. It is based on a matrix of dots: each dot has a dedicated electronic device that can turn it on or off. In contrast to traditional monitors, TFT technology provides a more stable image at a lower refresh frequency (60 Hz vs. the 85 Hz required by traditional monitors), a clearer display and better contrast. Additionally, it is more immune to interference and to reflection problems. TFT technology also has some disadvantages, such as the fact that image quality degrades as the viewing angle increases.

The most common printing device is the printer, which allows transferring text and graphical images to paper (and possibly other types of support, such as cloth, transparencies, etc.). There are several types of printers:
- Ink-jet printers: based on the emission of small ink drops onto the paper;
- Laser printers: coloring is achieved by means of a toner, which is fused to the support by heat;
- Dot-matrix printers: as in old typewriters, printing is accomplished by small needles hitting an ink tape placed between the needles and the support.
When colors are obtained by mixing four base colors (Cyan, Magenta, Yellow, blacK: the CMYK model), the term "quadrichromy" is used; graphical printing often uses hexachromy (six base colors). The quality of printing is measured in DPI (Dots Per Inch), i.e., the number of dots printed along a line one inch long.
Other typical parameters include the number of pages printed per minute (ppm) and the time it takes to print the first page. Graphical input devices enable the acquisition of images or text, translating them into digital formats. The main such device is the scanner. The most widespread type is the flatbed scanner: the original document to scan is placed on a glass plane, below which a head slides, carrying a set of sensors (Charge-Coupled Devices, CCDs) and a lamp. Special scanners are used for the acquisition of negatives and transparencies. For scanners, resolution is measured in SPI (Samples Per Inch). It represents the capability of the CCD sensor to detect, in two directions, the light reflected by the dots of an image. A quality figure for a scanner is its CCD linear horizontal resolution (also called optical resolution), which should not be confused with the output (or interpolation) resolution. As an example, a scanner with a 600 SPI linear resolution may reach an interpolation resolution of 4800 PPI (Points Per Inch).

A microphone is a transducer used to transform voice into an electrical signal. Conversely, speakers transform an electrical signal into the vibration of a membrane, producing sound waves. The power of speakers is measured as RMS (Root Mean Square) power and is expressed in watts; it may refer to a single speaker or to a whole diffusion system. Traditional speaker systems are connected to the line-out of an audio board, while modern digital systems are connected through a digital line (coaxial or optical). The number of speakers in an audio system is indicated as a number, followed by a dot and possibly a ‘1’ indicating the presence of a subwoofer (a special speaker used for low tones). As an example, a 2.1 system is comprised of two diffusers (called satellites) and one subwoofer.
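The relation between physical size and sampling resolution is a simple product; this hypothetical helper shows the pixel dimensions a scan produces:

```python
def scan_pixels(width_in, height_in, spi):
    """Pixel dimensions of a scan: page size (inches) times the
    sampling resolution (samples per inch)."""
    return round(width_in * spi), round(height_in * spi)

# An A4 page (about 8.27 x 11.69 inches) at 600 SPI optical resolution:
print(scan_pixels(8.27, 11.69, 600))   # (4962, 7014)
```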

C.5.2.3. Main multimedia storage standards

In the multimedia realm, magnetic tapes, such as videotapes, were largely used in the past, due to the large amount of information to be stored. Videocameras, in particular, used to store images in an analog format (VHS supports). Special peripherals were used to acquire the analog video stream, converting it to digital data and then compressing those data. New-generation video cameras store video data in the DV format, which ensures a native compression level of 5:1, even when stored on an analog support (mini-DV tapes). Acquiring the video data does not need any special device: it can take place through a Firewire connection, although a suitable software program is needed to store the video, for example in AVI format.

Today, optical storage technologies, such as DVD, have reduced the presence of magnetic tapes. Optical supports can be basically classified into two types: CDs (Compact Discs) and DVDs (Digital Versatile Discs). The structure is essentially the same for the two supports, but DVDs are characterized by higher densities and thus larger capacities. An optical support is formed by a polycarbonate layer and a reflecting metal layer covered with a plastic protective film. The polycarbonate layer has a spiral track containing “pits”; areas between pits are called “lands”. The succession of pits and lands determines the information stored on the support. Data are sensed by a beam emitted by a laser device (laser diode), whose light is reflected and detected by a suitable sensor. The access speed of an optical support is not comparable to that of a modern hard disk, due to the lower rotation speed and the larger seek time caused by the heavier heads required by optical devices. Many digital cameras use flash memories, while DVD recorders, equipped with internal hard disks, are increasingly replacing old-fashioned videorecorders. Likewise, mini DVDs are replacing magnetic tapes in videocameras. Multimedia applications are pushing the hard-disk market towards storage devices of ever larger capacity.
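The storage figures quoted earlier follow from the bit rates of the formats. The rates below (about 25 Mbit/s for DV, 4-5 Mbit/s for DVD-style MPEG-2) are assumed typical values, consistent with the ~12 GB vs ~2 GB per hour mentioned above:

```python
def video_size_gb(bitrate_mbps, hours):
    """Approximate recording size for a constant video bit rate
    (decimal units: 1 GB = 10^9 bytes)."""
    return bitrate_mbps * 3600 * hours / 8 / 1000

print(video_size_gb(25, 1))    # 11.25 GB for one hour of DV
print(video_size_gb(4.5, 1))   # ~2 GB for one hour of MPEG-2
```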

C.5.3 Principles of Wireless Communication

C.5.3.1. Wireless technologies

A communication system between a source of digital information and one or more destinations consists of a transmitter, a communication channel, and a receiver, as depicted in the following figure. The channel is the physical medium traversed by the signals that convey the information.

Figure 2: Building blocks of a digital communication system.

In wireless communications, signals have an electromagnetic nature and propagate through free space; this kind of communication does not use a solid medium, unlike transmissions over typical guided media such as telephone wires, coaxial cables or optical fibres. Communication through free space takes place thanks to transmitting and receiving antennas, which convert guided electromagnetic energy into radiated energy and vice versa.

Modulation
The transmitter is in charge of adapting the input digital signal to transmission over the channel, which uses electromagnetic signals. The adaptation is based on modulation: the process of combining the useful signal (i.e., the signal containing the information) with a signal (the carrier) that conveys it over the channel. The reverse operation is named demodulation; it takes place at the receiver side, to extract the useful signal from the received signal. The process of modulation consists of employing the digital information signal to change one or more parameters (amplitude, frequency, phase) of the sinusoidal carrier signal transmitted over the channel. The three basic modulation techniques change one of the above parameters: hence they are named Amplitude Modulation (AM), Frequency Modulation (FM), and Phase Modulation (PM), respectively. One further relevant modulation technique is Spread Spectrum. It uses a variable-frequency transmitter to spread the energy of the useful signal in the frequency domain, over a band larger than that of the signal being modulated. The receiver correlates the signals received over the variable frequencies to retrieve the useful signal. The goals of Spread Spectrum are to increase the signal-to-noise ratio, to decrease interference, and to establish secure communications. In wireless communications, the frequencies of carrier signals belong to the radio, microwave and infrared regions of the electromagnetic spectrum (Table 1).
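The simplest digital variant of amplitude modulation can be sketched in a few lines: the carrier is a sine wave whose amplitude switches between two levels according to the data bits (the levels and parameters below are arbitrary illustrative choices):

```python
import math

def am_modulate(bits, carrier_hz, sample_rate, bit_duration_s):
    """Two-level amplitude modulation: each data bit sets the
    amplitude of the sinusoidal carrier for one bit period."""
    samples = []
    per_bit = int(sample_rate * bit_duration_s)
    for i, bit in enumerate(bits):
        amp = 1.0 if bit else 0.2        # assumed high/low amplitude levels
        for n in range(per_bit):
            t = (i * per_bit + n) / sample_rate
            samples.append(amp * math.sin(2 * math.pi * carrier_hz * t))
    return samples

signal = am_modulate([1, 0, 1], carrier_hz=1000, sample_rate=8000, bit_duration_s=0.01)
print(len(signal))   # 240 samples (3 bits x 80 samples each)
```

FM and PM would instead vary the carrier's frequency or phase while keeping the amplitude constant.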

Table 1: The electromagnetic spectrum (radio, microwave and infrared regions)
Region | Frequency range | Wavelength
Radio | < 3 GHz | > 10 cm
Microwave | ~3 GHz – 300 GHz | ~10 cm – 1 mm
Infrared | 300 GHz – 428 THz | 1 mm – 700 nm
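The frequency and wavelength columns of Table 1 are linked by the propagation speed of electromagnetic waves (wavelength = c / frequency); a two-line check:

```python
def wavelength_m(frequency_hz):
    """Wavelength of an electromagnetic wave: speed of light / frequency."""
    c = 299_792_458  # speed of light in vacuum, m/s
    return c / frequency_hz

print(wavelength_m(3e9))     # ~0.1 m: the 10 cm radio/microwave boundary
print(wavelength_m(300e9))   # ~0.001 m: the 1 mm microwave/infrared boundary
```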

Technologies
The main technologies used in wireless networks are based on:
- infrared light;
- microwaves;
- radio waves.
Infrared technologies operate in the region of the electromagnetic spectrum with wavelengths between 1 mm and 700 nm, and they use amplitude modulation. They enable communications in direct and spread mode. In direct mode, they are used for directional transmissions; in fact, they require an unobstructed line of sight (LOS). In outdoor applications, direct infrared light is used for point-to-point connections, e.g. between buildings in LOS. In indoor applications, direct infrared connections can be set up within a room, e.g. to connect two personal devices, since infrared light cannot pass through walls. Spread (or omnidirectional) mode is used to realize point-multipoint and broadcast connections, e.g. by exploiting reflecting surfaces that bounce signals in every direction; the optical signals can thus be received by many devices in range.

Radio wave technologies operate in the region of the electromagnetic spectrum with wavelengths greater than 10 cm. Typically, they allow lower transmission rates than infrared technology. They are used to set up personal, local, as well as geographic networks. For indoor networks, they have a typical range of hundreds of metres; the outdoor range reaches a few kilometres for local networks, and tens of kilometres for geographic networks. Radio wave technologies use many modulation techniques, including AM, FM, PM and Spread Spectrum. A significant advantage lies in the possibility of changing the carrier frequency for better performance and security, or to set up several networks in a given area.

Microwave technologies operate at frequencies that range roughly from 1 to 300 GHz (wavelengths between 30 cm and 1 mm). They are typically used in point-to-point and point-multipoint local or long-range communications. A typical example of microwave long-range communication is in satellite systems, since microwaves can pass through the ionosphere, unlike longer radio waves.

C.5.3.2. Wireless standards

Taxonomy of wireless networks
A common categorization of wireless networks for digital data communications is made according to their coverage area. Accordingly, there are (Figure 3):
- Wireless Personal Area Networks or W-PANs (or simply PANs);
- Wireless Local Area Networks or W-LANs;
- Wireless Metropolitan Area Networks or W-MANs;
- Wireless Wide Area Networks or W-WANs;
- Wireless Global Networks or W-GNs.

Figure 3: Taxonomy of wireless networks, from Personal Area Networks to Global Networks.

The characteristics of these categories (range, related technology standards, typical bandwidth, typical applications) are summarized in Table 2.

Table 2: Types of wireless networks
Type | Range | Standard | Typical bandwidth | Examples of applications
W-PAN (or PAN) | Tens of meters | Bluetooth, IrDA | Up to 2 Mbps | Replacement of wires for portable devices
W-LAN | Building or campus | IEEE 802.11, HIPERLAN | Up to 54 Mbps | Extension of wired LANs
W-MAN | Town | IEEE 802.16, HIPERMAN | Up to 100 Mbps | Wireless connection among buildings
W-WAN | Geographic | GPRS, UMTS | Up to 2 Mbps | Mobile videotelephony, mobile television, mobile Internet
W-GN | Planet | --- | Up to 100 Gbps | Ubiquitous access to the Internet

Wireless Personal Area Networks have a short range of coverage, limited to that of a human being or a group of people, such as a room or a building hall. The most popular standards for PANs are Bluetooth and IrDA. Both are suited for direct data communication between end-user portable devices in a PAN, such as PDAs, mobile phones, media players, laptop computers and printers. Bluetooth uses a frequency of 2.4 GHz, with a range of up to a few tens of metres and a speed of up to 2 Mbps. The IrDA (Infrared Data Association) standard is based on infrared light, with a range of about 1 m and a data transmission speed of up to 4 Mbps.

Wireless Local Area Networks are suited to interconnect user devices within a range considerably larger than that of W-PANs. W-LANs are often extensions of wired local networks, and they have similar performance. The well-known 802.11 standards suite for W-LANs is defined by IEEE, the worldwide Institute of Electrical and Electronics Engineers: it uses transmission frequencies between 2.4 GHz and 5 GHz, with a data communication rate between 11 Mbps and 54 Mbps. The main characteristics of IEEE 802.11 protocols for wireless networks are summarized in Table 3.

Table 3: Protocols for wireless networks
Protocol | Release date | Op. frequency | Data rate (max) | Range (indoor) | Range (outdoor)
802.11a | 1999 | 5 GHz | 54 Mbps | ~35 m | ~120 m
802.11b | 1999 | 2.4 GHz | 11 Mbps | ~38 m | ~140 m
802.11g | 2003 | 2.4 GHz | 54 Mbps | ~38 m | ~140 m
802.11n | ~ June 2009 | 2.4 GHz / 5 GHz | 248 Mbps | ~70 m | ~250 m
802.11y | ~ June 2008 | 3.7 GHz | 54 Mbps | ~50 m | ~5,000 m
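The figures in Table 3 can be kept as a small lookup table, for instance to pick standards that meet a required data rate. The values below are those quoted in Table 3; the selection helper is illustrative:

```python
# IEEE 802.11 variants from Table 3 (frequencies in GHz, max rate in Mbps,
# indoor range in metres).
STANDARDS = {
    "802.11a": {"freq": (5.0,),     "rate": 54,  "indoor": 35},
    "802.11b": {"freq": (2.4,),     "rate": 11,  "indoor": 38},
    "802.11g": {"freq": (2.4,),     "rate": 54,  "indoor": 38},
    "802.11n": {"freq": (2.4, 5.0), "rate": 248, "indoor": 70},
}

def meeting_rate(min_mbps: float) -> list[str]:
    """Names of standards whose nominal maximum rate meets the requirement."""
    return [name for name, s in STANDARDS.items() if s["rate"] >= min_mbps]

print(meeting_rate(54))   # ['802.11a', '802.11g', '802.11n']
print(meeting_rate(100))  # ['802.11n']
```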

The HIPERLAN (HIgh PERformance Radio LAN) standard has been developed by ETSI, the European Telecommunications Standards Institute: it uses a transmission frequency of 5 GHz, with a maximum speed of 54 Mbps.

Wireless Metropolitan Area Networks have an operating range of several square kilometres. Their common application is the interconnection of remote buildings or LANs. The emerging standard is IEEE 802.16, which has a nominal maximum transmission rate of 72 Mbps at a distance of several kilometres.

Wireless Wide Area Networks are geographic-scale wireless networks, often used for interconnecting networks of different operators. Common standards are GPRS and UMTS. GPRS (General Packet Radio Service) is used in the second generation (2G) of mobile telephony networks; it has a nominal maximum speed of 171 kbps. UMTS (Universal Mobile Telecommunication System) is used in the third generation (3G) of mobile telephony; it has a nominal maximum speed of 2 Mbps within buildings, 384 kbps in urban deployments, and 144 kbps in rural areas.

C.5.3.3. Limitations and issues associated with wireless computing

Wireless and mobile technologies have a number of associated limitations and issues that have to be taken into account in the design and deployment of networks and services.

Wireless networks may be subject to limitations imposed by national regulations on frequencies and/or on the transmission power of antennas. In many countries the range of frequencies used by most local wireless networks can be freely used, provided the networks are deployed on private property and transmission powers do not exceed a pre-defined threshold. On public property, the usage of those frequencies is subject to governmental authorization, and installations are required not to cause interference to other services. All installations have to safeguard people's health and the environment. Public operators also have to guarantee the identification of their users.

The maximum operating distance for wireless devices to communicate effectively depends on several factors, such as transmission power and propagation paths. The most evident limitations are due to the presence of physical obstacles, such as walls indoors and buildings outdoors. These affect the energy propagation of radio waves and thus the so-called network coverage. Less evident limitations, which can however significantly affect performance, are due to electromagnetic interference with other kinds of equipment operating in the same range of frequencies.

Infrared transmissions are generally not subject to licensing. However, network performance strongly depends on weather phenomena and on interference from other light sources, such as the sun or fluorescent lights. Radio technologies are subject to interference: signals transmitted by devices that work at the same carrier frequency can collide (e.g. microwave ovens and cordless phones). This is a significant shortcoming, especially for devices working in unlicensed frequency bands. Interference impacts performance because of the retransmissions it causes.

Obstacles and interference may affect the performance of a wireless network, making it unusable for some applications, in particular for continuous and/or synchronous multimedia applications. A W-LAN operating in optimal conditions guarantees a performance level which is sufficient for most applications, such as e-mail, file transfer and Internet access. Currently, the maximum throughput of a standard W-LAN is sufficient even for advanced multimedia applications, such as conferencing.

C.5.4 Wireless Networks and Protocols

C.5.4.1. W-LAN generic architecture

A typical configuration of a wireless local network (W-LAN), called an infrastructure-based W-LAN, is composed of a wireless infrastructure that interconnects end-user devices (Figure 4). The infrastructure is composed of base stations, called access points, which may in turn be linked to a distribution system, such as an Ethernet wired network. Users access network services through devices equipped with a wireless network interface card (NIC), which connects them to the infrastructure. Base stations can support roaming, i.e. they can maintain the network connection even when a device moves from the coverage area of one access point to that of another. This is similar to roaming in cellular telephony networks.

Figure 4: Architecture of an infrastructure-based wireless system.

An alternative configuration of a W-LAN, without access points, is often used to directly interconnect user devices, without the need for a real infrastructure. Such a configuration is called an ad hoc network.

C.5.4.2. Interoperability and compatibility issues

The IEEE 802.11a and IEEE 802.11b/g standards have not been designed to be interoperable. Hence, equipment (network cards and access points) implementing different standards cannot be interconnected. However, multimodal network cards, i.e. cards that implement more than one standard, are commercially available; they overcome this limitation and allow compatibility. The major differences between IEEE 802.11b, a and g are:
• IEEE 802.11b operates at a frequency of 2.4 GHz, with a bandwidth of 83 MHz. It has an 11 Mbps transmission speed and a 90-metre indoor coverage. This standard uses the DSSS (Direct Sequence Spread Spectrum) modulation technique. An IEEE 802.11b compliant device provides 3 non-overlapping channels for signal transmission.
• IEEE 802.11a operates at a frequency of 5 GHz, with a 150 MHz bandwidth and a transmission speed of up to 54 Mbps. It covers up to 30 metres and it uses the OFDM (Orthogonal Frequency Division Multiplexing) modulation technique. An IEEE 802.11a compliant device provides 12 non-overlapping channels for signal transmission; thus, an IEEE 802.11a device is able to support more users than an 802.11b one.
• IEEE 802.11g introduces a further modulation technique. It works at the same frequency as 802.11b, at a transmission speed of up to 54 Mbps. The IEEE 802.11g specifications assure backward compatibility with 802.11b.
Applications with high quality-of-service requirements are better supported by IEEE 802.11a, since it has a transmission speed of up to 54 Mbps against the 11 Mbps assured by IEEE 802.11b. Furthermore, the 5 GHz band is less crowded than the 2.4 GHz band; hence, IEEE 802.11a networks are not subject to interference as much as IEEE 802.11b networks, and collisions occur rarely. On the other hand, IEEE 802.11a devices cover smaller areas than IEEE 802.11b/g ones; hence a greater number of access points is needed to cover a given area. Devices from different manufacturers may not be able to interoperate.
For this reason, the non-profit Wi-Fi Alliance was established in 1999: its goal is to verify and assess interoperability among 802.11 devices. Wi-Fi certification testing procedures assure interoperability among equipment. Certified devices are labelled as “Wi-Fi certified”, meaning that they are compatible with other certified devices, even those coming from different vendors. Some compatibility issues may arise related to security protocols in a W-LAN. In fact, malicious users could intercept data transmitted over the air. For this reason, standard security protocols have been developed. The IEEE 802.11 standard implements the Wired Equivalent Privacy (WEP) protocol. WEP helps prevent intrusions but it is not able to completely solve security problems; improved security protocols have been introduced in more recent standards and technologies (IEEE 802.11i, also known as WPA2). However, these may not be supported by all commercially available wireless devices.

C.5.4.3. Satellite networks

Introduction

In a satellite network, a terrestrial transmitting station sends data to a satellite in orbit, which acts as a repeater, sending data back to Earth to receiver stations – fixed or mobile – that are in its coverage area. The coverage area of a single satellite, much larger than that of terrestrial systems, makes this technology suited to building global wireless networks. Satellite networks are a classical example of broadcast technologies, and they are characterized by broadband transmissions. More recently, satellite technologies have been used to build networks for high-speed Internet access or to build enterprise networks. They also represent a viable solution for connecting geographical areas that would otherwise be cost-ineffective to connect at high speed through terrestrial links; one such example is mountain communities far from urban areas. A further application domain of satellite networks is the management of large-scale emergency situations, for instance to enable coordination and sharing of information among mobile operators in the geographic area of a natural disaster.

Satellite communications

The transmission between stations on Earth and a satellite uses radio frequency communications, with frequencies ranging from 1 GHz to 40 GHz. The transmission channel from a terrestrial station to the satellite is called the uplink, while the opposite direction is called the downlink. A transmitter station on Earth sends the satellite a signal at a high frequency, called the uplink frequency; the satellite then re-transmits the signal to receiving stations, shifting it to a different frequency (the downlink frequency) to avoid interference between the uplink and downlink signals. The electronic equipment installed on the satellite that converts the uplink signal into the downlink signal is called a transponder. The uplink operates at a higher frequency than the downlink: higher frequencies are subject to greater dispersion, and compensating for the resulting performance loss requires a higher transmission power, which terrestrial stations can provide more easily than transmitters on satellites. Besides broadcast transmissions, satellite communications support point-to-point and point-to-multipoint connections (Figure 5) between stations on Earth.

Figure 5: A satellite network for a point-multipoint connection.

Architecture of a satellite network system

A satellite network consists basically of stations on Earth that communicate through a satellite in orbit. Terrestrial stations are equipped with a satellite antenna (receiving and/or transmitting) and with a satellite network card (satellite modem). On the satellite, the transponder can operate in unicast or multicast mode: in the former case, the transmission is intended for a specific terrestrial station; in the latter, it is intended for a non pre-defined number of nodes. In a unidirectional satellite system, data communications from the user station to the network take place over wired links, while data flows in the opposite direction use the satellite downlink (Figure 6). In this scenario, user stations (e.g., computers) are linked to a common parabolic antenna – the same used for digital satellite television – by means of a proper interface card (SAT modem).

Figure 6: A satellite network for unidirectional communications.

Satellite networks can also be used for bidirectional communications and broadband Internet access (Figure 7): in this scenario, a client station on Earth can send data packets through the satellite to a hub connected to an Internet backbone, by means of the shared uplink channel. Data packets from the Internet are sent back to the client station through the hub and the satellite, via its downlink channel. The role of the hub is to translate between radio and Internet data packets. It is worth noting that in this kind of satellite network there is an asymmetry between downlink and uplink speeds; indeed, the uplink bandwidth required by typical Internet applications is much lower than the downlink bandwidth.

Figure 7: A satellite network for bidirectional communications and Internet access.

Geostationary Earth Orbit (GEO) satellites

Geostationary or GEO (Geostationary Earth Orbit) satellites are positioned at an exact height above the Earth (about 36,000 km), normally over the equator; they rotate around the Earth at the same speed as the Earth rotates around its axis, in effect remaining stationary above a point on the Earth. GEO satellites are the most commonly used satellites. Their distance from Earth allows a coverage area of approximately one fourth of the terrestrial surface. Three GEO satellites placed 120 degrees apart are sufficient to cover the most populated areas on Earth, excluding the poles. However, the distance of 36,000 km causes noticeable transmission delays and attenuation.

Low Earth Orbit (LEO) satellites

Low Earth Orbit or LEO satellites have a circular or slightly elliptical orbit at less than 2,000 km from Earth, with a period of 1.5 to 2 hours. The diameter of the area corresponding to their orbit is about 8,000 km. Thus, the speed of the satellite with respect to Earth is rather high, which means that the signal is subject to alterations due to the Doppler effect. A global network based on LEO satellites is made up of several satellites; the signal transmitted by a station on Earth may pass through more than one satellite before reaching a very distant destination. LEO satellites have the advantage over GEO satellites of a lower transmission delay and a lower signal attenuation. Their coverage area is smaller but it can be defined more precisely. On the other hand, in order to cover a very wide geographic area, more LEO than GEO satellites are needed.

Medium Earth Orbit (MEO) satellites

Medium Earth Orbit or MEO satellites have a circular orbit at a distance from Earth between 5,000 and 12,000 km, with a period of about 6 hours. Their coverage area has a typical diameter between 10,000 and 15,000 km. In order to cover a given very wide area on Earth, the number of MEO satellites required is lower than the number of LEO satellites. MEO satellites have a transmission delay and an attenuation factor in between those of LEO and GEO satellites.
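The delay differences among the three orbit classes follow directly from signal propagation at the speed of light. A sketch using the altitudes quoted above (the MEO altitude of 8,000 km is an illustrative mid-range value):

```python
# One-way ground-to-satellite propagation delay, ignoring processing time
# and assuming the satellite is directly overhead.
C_KM_S = 299_792.458  # speed of light, km/s

def propagation_delay_ms(altitude_km: float) -> float:
    """Minimum one-way propagation delay in milliseconds."""
    return altitude_km / C_KM_S * 1000

print(round(propagation_delay_ms(36_000), 1))  # GEO: ~120 ms one way
print(round(propagation_delay_ms(2_000), 1))   # LEO upper bound: ~6.7 ms
print(round(propagation_delay_ms(8_000), 1))   # mid-range MEO: ~26.7 ms
```

The ~120 ms one-way figure for GEO (so roughly half a second for a request and its reply, each crossing the link twice) explains why GEO links are problematic for interactive applications.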

C.5.4.4. Protocols for mobile stations

C. Bluetooth

Introduction

The Bluetooth standard enables wireless communications for several personal networking applications, such as connecting PDAs or mobile phones to PCs, or wireless headphones to mobile phones. Bluetooth has had an enormous impact on the consumer electronics market, which is likely to continue in the coming years. Bluetooth devices may communicate within a range from 10 cm to 100 m, depending on signal strength, obstacles and interference. The maximum data transfer rate is 720 kbps.

Piconet

A Bluetooth network is called a piconet; it is composed of at most eight devices, with one of them configured as master and the others as slaves. The master is the device that establishes the piconet; any device may become the master of a piconet by initiating the connection with other (slave) devices. A piconet is governed by the master station, which assigns the initial time slots when beginning the communication with the slaves; slave devices are enabled to transmit only when prompted by the master.

Classes of Bluetooth devices

Bluetooth was designed with particular consideration for battery consumption in mobile devices; there are three distinct classes of devices:
- Power Class 1 devices work over longer distances (~100 m); they operate at a maximum power of 20 dBm;
- Power Class 2 devices are for medium-range applications (~10 m); they operate at a maximum power of 4 dBm;
- Power Class 3 devices are for short-range applications (~10 cm); they operate at a maximum power of 0 dBm.
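The power levels quoted in dBm convert to milliwatts with P = 10^(dBm/10); Class 1's 20 dBm is 100 mW, while Class 3's 0 dBm is just 1 mW:

```python
# Convert a power level in dBm (decibels relative to 1 mW) to milliwatts.
def dbm_to_mw(dbm: float) -> float:
    """P_mW = 10 ** (dBm / 10)."""
    return 10 ** (dbm / 10)

print(dbm_to_mw(20))  # Class 1: 100.0 mW
print(dbm_to_mw(4))   # Class 2: ~2.5 mW
print(dbm_to_mw(0))   # Class 3: 1.0 mW
```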

Bluetooth protocol stack

The Bluetooth standard encompasses a set of layered protocols (Figure 8). The host controller interface is the interface capable of accessing the hardware of the mobile device; it is used by the upper levels to access basic services.

Figure 8: The Bluetooth protocol stack.

The RF level

The physical layer of the Bluetooth standard is the radio level, which operates in the Industrial Scientific Medical (ISM) 2.4 GHz unlicensed band, using the frequency hopping spread spectrum (FHSS) transmission scheme.

The Baseband level

The Baseband level controls channels and physical links, offering synchronization between Bluetooth devices, error correction, and security. The transmission scheme uses time division duplexing (TDD), where master and slaves transmit alternately. Each piconet uses a channel whose temporal axis is divided into frames, in turn divided into time slots of 625 μs; the communication is governed by the master's clock. The master may send packets to slaves in odd slots, whereas even slots are reserved for the slaves' transmissions. To avoid collisions, a slave may transmit in a slot only if the previous slot was the one used by the master to communicate with it. Figure 9 shows the time frames and slots in a piconet composed of the master and two slave devices.

Figure 9: A Bluetooth slave device can transmit only if prompted by the master in the previous slot. The greater the number of slaves in a piconet, the longer the time frames, and the longer each slave has to wait for its time slot to transmit again.
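The polling discipline above (a slave transmits only in the slot following the one in which the master addressed it) can be sketched with a toy round-robin model; the piconet and the scheduling policy here are illustrative, not taken from the standard:

```python
# Illustrative sketch of Bluetooth TDD polling: the master addresses one slave
# per master slot, and only that slave may transmit in the following slot.
SLOT_US = 625  # slot duration in microseconds

def schedule(slaves: list[str], n_frames: int) -> list[str]:
    """Round-robin polling: alternating master and slave slots."""
    slots = []
    for i in range(n_frames):
        target = slaves[i % len(slaves)]
        slots.append(f"master->{target}")   # master slot, addressed to one slave
        slots.append(f"{target}->master")   # only the addressed slave replies
    return slots

s = schedule(["slave1", "slave2"], 2)
print(s)  # ['master->slave1', 'slave1->master', 'master->slave2', 'slave2->master']
print(len(s) * SLOT_US, "us of channel time")  # 2500 us
```

With more slaves, each one is polled less often, which is exactly the waiting effect described in Figure 9.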

Connection types

There are two connection types between masters and slaves:
- Synchronous Connection-Oriented (SCO);
- Asynchronous ConnectionLess (ACL).
SCO is a synchronous point-to-point connection between the master and a slave in which slots are reserved at regular intervals; it can therefore be considered a circuit-switched connection. SCO packets are never re-transmitted. SCO is typically used to transmit voice data. ACL connections are asynchronous point-to-multipoint packet-switched connections. Re-transmission is provided for ACL packets to ensure data integrity.

Communication phases in a piconet

Let us illustrate how devices create and abandon a piconet. Figure 10 shows the phases of a successful communication in a piconet.

Figure 10: Phases of communication in a piconet.

The inquiry procedure

The inquiry procedure is used to discover Bluetooth devices, their addresses and their clocks. When a unit is ready to be discovered by other devices, it performs an inquiry scan, listening for other stations' requests. When an inquiry message is received, the unit (and ultimately, the user) can decide to respond. A device willing to discover the stations in range enters the inquiry state and transmits an inquiry message. This step continues until the station decides that it has gathered sufficient information about devices in range, or until a timeout expires.

The paging procedure

After a device has been discovered through the inquiry procedure, a Bluetooth connection can be established. At the RF level, a connection is established when all devices are synchronized with the master's clock and address. In the upper levels of the protocol stack, a connection is established when an ACL link is active and information can travel using the L2CAP and SDP protocols. The baseband procedure used for establishing this kind of link is called paging. When a unit intends to establish a connection, it enters the page state and begins transmitting its Device Access Code to the intended recipient; at this point the recipient station must be in the page scan state. The station originating the paging becomes the master and the responding station becomes a slave. The slave adds an offset to its clock in order to synchronize with the master. The slave and the master then enter the connection state.

Device connection states

In the connection state, devices can operate in four distinct modes:
• ACTIVE MODE: the unit sends and receives packets. In the master-to-slave slot the unit listens for packets addressed to itself, and it responds in the next slave-to-master slot;
• SNIFF MODE: the slave listens only in certain master-to-slave slots, and keeps listening only if it is the recipient of a packet;
• HOLD MODE: the slave does not receive ACL packets and may switch to the page scan, inquiry scan, page, or inquiry states; the duration of this mode is agreed between master and slave;
• PARK MODE: this is used by slaves that have no need to use the channel but wish to remain synchronized with the piconet; it is particularly useful for low power consumption.
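As an illustration, the four modes can be modelled as a small set of states ordered by typical activity (ACTIVE highest, PARK lowest). The ordering is an assumption for illustration, not a figure from the Bluetooth specification:

```python
from enum import Enum

# Illustrative model of the four Bluetooth connection-state modes,
# ordered by assumed typical activity/power consumption.
class Mode(Enum):
    ACTIVE = 0  # sends and receives packets
    SNIFF = 1   # listens only in agreed master-to-slave slots
    HOLD = 2    # no ACL packets for an agreed duration
    PARK = 3    # synchronized with the piconet, not using the channel

def lower_power_than(a: Mode, b: Mode) -> bool:
    """True if mode a typically consumes less power than mode b (assumed ordering)."""
    return a.value > b.value

print(lower_power_than(Mode.PARK, Mode.ACTIVE))   # True
print(lower_power_than(Mode.ACTIVE, Mode.SNIFF))  # False
```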

Link Manager level

Communication between the Link Manager (LM) levels of two devices takes place through the LM protocol. This is responsible for the creation and management of the link between units and for the management of the piconet. In more detail, the LM protocol is used for:
• establishment and closure of connections;
• role exchange (master-slave);
• management of the low power consumption modes (Hold, Sniff, Park).

Logical Link Control and Adaptation Protocol (L2CAP) level

The Logical Link Control and Adaptation Protocol (L2CAP) level provides connection-oriented and connectionless services, and it is in charge of interfacing the upper levels to the services of the Baseband level (Figure 11).

Figure 11: The L2CAP level.

L2CAP allows transmission and reception of data packets of up to 64 KB. It provides a high-level logical channel abstraction to applications running concurrently on the device, sharing the physical Bluetooth channel. To this aim, L2CAP multiplexes data coming from the upper layers onto low-level connections. The L2CAP level is in charge of performing data segmentation and reassembly. Each L2CAP logical channel has a unique local identifier (Channel IDentifier, or CID).

RFCOMM

The Bluetooth RFCOMM level provides a radio frequency oriented emulation of the serial COM ports of a PC: it emulates the standard RS232 serial communication protocol (9-pin RS232) over an L2CAP channel. It is also able to multiplex data coming from several emulated serial ports into a single connection.

Service Discovery Protocol (SDP)

The Service Discovery Protocol (SDP) provides client devices with a method to discover the services that are available on Bluetooth server devices in range, along with their properties. The discovery involves simple SDP Request messages sent by the client, and SDP Response messages sent by the server. It is worth pointing out that SDP is a protocol just for discovering the services available on devices in range, not for accessing them. Hence, after the discovery, a client device has to perform the procedures to set up a connection to the server in order to access a given service. There are two modes for clients to discover Bluetooth services:
1) The client looks for a service characterized by certain attributes, checking whether a server is able to provide it.
2) The client explores the services provided by a specific server: this mode is named “service browsing”. To make service browsing easier, the server can organize the available services in groups.

Scatternet

Two or more piconets can coexist in the same area. A group of interconnected piconets is called a scatternet. Every piconet has its own master. A device may belong to multiple piconets in the same scatternet only if it acts as master in at most one of them. A node that links two piconets is called a bridge. The larger the number of piconets in the same area, the higher the probability of collisions.
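The structural rules just described (at most eight devices per piconet, one master each, a device mastering at most one piconet in a scatternet) can be checked with a small model; the device names are illustrative:

```python
# Illustrative check of piconet and scatternet constraints.
def valid_piconet(master: str, slaves: list[str]) -> bool:
    """A piconet has one master and at most seven active slaves (8 devices total)."""
    return len(slaves) <= 7 and master not in slaves

def valid_scatternet(piconets: list[tuple[str, list[str]]]) -> bool:
    """A device may act as master in at most one piconet of the scatternet."""
    masters = [m for m, _ in piconets]
    return len(masters) == len(set(masters)) and all(
        valid_piconet(m, s) for m, s in piconets
    )

# "bridge" joins both piconets: master of the first, slave in the second.
print(valid_scatternet([("bridge", ["a", "b"]), ("m2", ["bridge", "c"])]))  # True
print(valid_piconet("m", [f"s{i}" for i in range(8)]))  # False: too many slaves
```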

C. Mobile IP

The future Internet aims to provide connectivity to mobile users through portable devices, such as notebook computers and mobile phones, regardless of their current position. Mobile IP is an Internet Engineering Task Force (IETF) standard protocol designed to allow mobile users to move from one network to another while maintaining a permanent IP address and a permanent connection.

Home and Foreign domains and agents

The Mobile IP specifications assume that each mobile terminal has a Home Domain, within which it is assigned a permanent IP address. The address uniquely determines the home domain of the terminal, just as a telephone number in traditional telephony determines the location (through the area code). The Foreign (or External) Domain is the network (different from the Home Domain) currently visited by the user. When the terminal is connected to its home domain, IP packets are routed by the IP protocol as usual. When the user moves and his/her terminal connects to an external domain, the responsibility for properly routing IP packets to the terminal is assigned to two agents, called the Home Agent (HA) and the Foreign Agent (FA), which are two daemon processes typically executed on static network nodes. A Home Agent keeps track of the terminals belonging to its domain that may migrate to an external domain. A Foreign Agent is responsible for tracking (external) mobile terminals that have migrated to its domain. A typical scenario is depicted in Figure 12. Two domains on a WAN are shown: a local area network, which is the home domain of a mobile terminal, and a second network, representing the external domain to which the terminal will connect.

Figure 12: Home and foreign domains in a Mobile IP scenario.

Agent interactions

Mobile IP supports mobility by enabling the routing of IP packets to/from mobile terminals without requiring them to change their (permanent) IP address, independently of their current location. The mechanism is based on the cooperation of the home and foreign agents. When a mobile terminal connects to an external domain, the FA informs the terminal's HA of the visiting terminal. After a registration procedure, packet routing is based on IP tunneling. IP packets addressed to the mobile terminal are routed to the HA, which in turn encapsulates them in Mobile IP packets forwarded to the FA. The encapsulation process consists of inserting a new IP header, preserving the original one, and tunneling the packet to the mobile node's FA. The packets are de-capsulated by the FA at the end of the tunnel, which removes the added IP header and delivers them to the mobile node. When acting as a sender, a mobile node simply sends packets directly to the destination node through the FA.
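The encapsulation step amounts to wrapping the original packet, unchanged, in an outer header addressed to the FA. A sketch with illustrative addresses and field names (real Mobile IP uses IP-in-IP headers, not dictionaries):

```python
# Illustrative sketch of Mobile IP tunneling: the HA wraps the original packet
# in an outer header addressed to the FA, which strips it off again.
def encapsulate(packet: dict, ha_addr: str, fa_addr: str) -> dict:
    """HA side: add an outer header, preserving the original packet intact."""
    return {"src": ha_addr, "dst": fa_addr, "payload": packet}

def decapsulate(tunneled: dict) -> dict:
    """FA side: remove the outer header and recover the original packet."""
    return tunneled["payload"]

original = {"src": "corr.example", "dst": "10.0.0.7", "payload": "data"}
tunneled = encapsulate(original, ha_addr="ha.home.example", fa_addr="fa.visited.example")
print(tunneled["dst"])                    # fa.visited.example: routed to the FA
print(decapsulate(tunneled) == original)  # True: original packet is preserved
```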

Registration procedure

When a mobile terminal migrates from its home domain to an external one, the first operation it performs is to register itself with the Foreign Agent of the new domain. This procedure consists of the following steps:
1. The terminal waits for an announcement message broadcast by the foreign agent. With such a message, the agent announces its presence and its IP address. If this message does not arrive within a certain time, it is the responsibility of the mobile terminal to broadcast a message in order to check for the presence of a foreign agent in the visited domain;
2. The terminal starts the registration by providing the FA with its home agent address and additional information used for secure authentication and access;
3. The FA notifies the HA of the presence of the terminal being registered;
4. The HA checks the security information. If it trusts the received information, it sends a confirmation message to the FA on the external domain;
5. Upon receiving the confirmation message from the HA, the FA informs the terminal that the registration process has succeeded.
Mobile IP also provides a de-registration procedure, by which a terminal announces that it is leaving the foreign domain. De-registration releases network resources.

Registration lease

The de-registration procedure is often not performed explicitly by users, who turn off their mobile terminals without worrying about releasing resources. In particular, the tunnel created by the FA will still be active after the disconnection. To overcome this problem, a period of validity, also known as the registration lease, is associated with the registration. If not renewed, the lease expires and the terminal is de-registered. In this way, a sort of automatic de-registration is implemented.
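The lease mechanism amounts to an expiry timestamp that the terminal must refresh; a minimal sketch, with class and field names chosen for illustration:

```python
# Illustrative registration lease: if not renewed before expiry, the terminal
# is considered de-registered and the tunnel can be torn down.
class Registration:
    def __init__(self, terminal: str, lease_s: float, now: float):
        self.terminal = terminal
        self.expires_at = now + lease_s

    def renew(self, lease_s: float, now: float) -> None:
        self.expires_at = now + lease_s

    def is_active(self, now: float) -> bool:
        return now < self.expires_at

reg = Registration("mobile-1", lease_s=60, now=0)
print(reg.is_active(now=30))   # True
reg.renew(lease_s=60, now=30)
print(reg.is_active(now=80))   # True: renewed at t=30, expires at t=90
print(reg.is_active(now=100))  # False: lease expired, automatic de-registration
```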

C. WAP

The WAP (Wireless Application Protocol) standard was promoted by the WAP Forum in 1997, with the aim of supporting interoperability between the Internet and mobile networks, and specifically of enabling access to Internet services and applications from everywhere through mobile devices. Industries have put effort into the development of micro browsers, enabling the delivery of Internet applications, services and contents to mobile wireless devices, which are characterized by reduced power and limited resources (e.g., a small display). Indeed, interoperability between wired and wireless devices is not easy to achieve, due to the significant structural differences between them. Compared to wired terminals such as PCs, wireless devices exhibit:
• less powerful CPUs;
• less memory (ROM and RAM);
• restricted power consumption;
• smaller displays;
• different input devices (e.g., a phone keypad, voice input, etc.).
WAP has an architectural scheme that mirrors the layers of the HTTP-based Web protocols (Figure 13). It has been designed considering the ergonomics of mobile devices (few buttons and a small display), as well as the optimizations that can be made during content delivery to cope with the low reliability of wireless connections.

Figure 13. Comparison between Web and WAP protocols.

The WAP protocol stack is structured in 5 layers, shown in Figure 14.

Figure 14. The WAP protocol stack.

WAE: Wireless Application Environment

WAE is the uppermost layer of the WAP stack; it aims to provide a platform-independent environment, enabling developers to create services and applications that can run on different wireless devices and platforms. Hence, WAE is not actually a protocol; rather, it is a set of specifications including a micro browser environment that features support for:
• Wireless Markup Language (WML): a “light” version of HTML;
• WMLScript: a scripting language quite similar to JavaScript;
• Wireless Telephony Application (WTA): an application programming interface for the integration of WAP and phone services.

WSP: Wireless Session Protocol

WSP provides two kinds of services:
• connection-oriented services working on the WTP layer;
• connectionless services, based on datagrams.
WSP also provides services for browser applications, namely WSP/B, such as:
• HTTP/1.1 semantics and functionalities for persistent connections;
• reliable and unreliable “data push” support.

WTP: Wireless Transaction Protocol

The WTP protocol provides a lightweight connection-oriented protocol to be used by small mobile client devices. It features:
• three classes of services:
1. unreliable one-way requests (no confirmation of the received message);
2. reliable one-way requests (message reception is acknowledged);
3. reliable request/response;
• retransmission until reception of an ack message;
• segmentation and reassembly: messages that exceed the maximum packet length defined by the transport channel are segmented at the sender side and re-assembled at the receiver side;
• transaction identification: a transaction is identified by 5 elements: {source port, destination port, source address, destination address, transaction identifier (TID)};
• user confirmation: thanks to this functionality, a sender can request a confirmation from the receiver for each packet. If the ack is not received within a timeout, the transaction is aborted. WSP uses this functionality to indicate that the received packet has been correctly processed.
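The segmentation and reassembly step above can be sketched generically: a message is split into chunks no larger than the channel's maximum packet length and rebuilt at the receiver. The packet size here is an arbitrary illustrative value, not a WTP constant:

```python
# Illustrative segmentation and reassembly, as done by WTP when a message
# exceeds the transport channel's maximum packet length.
def segment(message: bytes, max_len: int) -> list[bytes]:
    """Split a message into chunks of at most max_len bytes."""
    return [message[i:i + max_len] for i in range(0, len(message), max_len)]

def reassemble(segments: list[bytes]) -> bytes:
    """Receiver side: concatenate the chunks back into the original message."""
    return b"".join(segments)

msg = b"a message longer than one packet"
parts = segment(msg, max_len=10)
print(len(parts))                       # 4 packets
print(reassemble(parts) == msg)         # True: the receiver recovers the message
```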

WTLS (Wireless Transport Layer Security) and WDP (Wireless Datagram Protocol)
WTLS is the WAP security layer, providing a secure transport layer between a WAP gateway and WAP clients (e.g. a mobile phone). This layer ensures data integrity and provides terminal authentication mechanisms. It is derived from the SSL protocol used on the Internet, and it can also be used to perform secure communications among radio devices, through cryptographic algorithms (both symmetric and public-key based).
WDP is the transport layer of the WAP protocol stack; it aims to isolate the upper layers from the underlying network. This allows the development of portable and network-independent applications. WDP has been designed for interoperability with several transport channels, including the Short Message Service (SMS).

WML: Wireless Markup Language
WML (Wireless Markup Language) is the markup language used to develop WAP pages. It is the equivalent of the HTML language used to develop Web pages for the Internet. Similarly to HTML pages, WML pages are interpreted by an ad hoc software application running on the mobile device, usually referred to as a micro-browser. Its functionality is quite similar to that of a classic web browser, but it is tailored for mobile devices. In the typical scenario of the interconnection of a wireless cellular network to the Internet, a WAP gateway is responsible for translating between HTTP and WAP packets (Figure 15).

Figure 15. Accessing the Internet from cellular phones through a WAP gateway.

WAP pages
WAP pages are typically called decks, since they are composed of several cards. A card represents the navigation unit that the browser displays on the wireless device, whereas the deck is the smallest entity that a WAP device can download from a WAP server. When a WAP client requests a page available on a server, it receives the entire deck (Figure 16). The first page is displayed, and subsequent user navigation among the deck pages does not require reconnection to the server, since the entire deck has already been downloaded and is available on the client device. This caching mechanism improves protocol performance. However, the size of the decks a device is able to receive is constrained by the device's available memory.

Figure 16. WML page request.

WML syntax
In the following, the basic syntax for the development of a WML page is described. The starting line of a WML page is
<?xml version="1.0"?>
It indicates the XML version being used. The deck body is always enclosed between two wml tags:
<wml> […] </wml>
Several cards may be included between these tags. A card is defined by the <card> tag:
<card> […] </card>
Each card is identified by a page id. For instance:
<card id="pag1"> […] </card>
A special tag, namely the <p> tag, defines a paragraph; it is used to make the page text visible:
<p> […] </p>
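Putting together the elements described so far, a minimal deck might look as follows (the card ids and text are invented for illustration):

```xml
<?xml version="1.0"?>
<wml>
  <card id="pag1">
    <p>Text of the first card.</p>
  </card>
  <card id="pag2">
    <p>Text of the second card.</p>
  </card>
</wml>
```

Since both cards travel in the same deck, the micro-browser can move from pag1 to pag2 without contacting the server again.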


Text formatting parameters have to be included in the so-called formatting tags:
<em> … </em> emphasized text
<strong> … </strong> strongly emphasized (bold) text
<i> … </i> italic text
<b> … </b> bold text
<u> … </u> underlined text

WML cards can perform actions, which are defined by enclosing them between the following markers: <do> […] </do>

Four actions, called TASK ELEMENTS, can be specified:
1. Go to another card: <go> </go>
2. Go back to the previous card: <prev> </prev>
3. Do nothing: <noop/>
4. Update the page content: <refresh> </refresh>
It is also possible to insert graphical objects (black and white images with the “.wbmp” file extension) within a WAP page by using the <img/> marker. The main attributes for images are:
alt="": a text string that is displayed if the image is not available;
src="": the source address of the image to be displayed;
align="": image and text alignment (top, middle, bottom);
height="": image height;
width="": image width;
vspace="": vertical space to be left blank around the image;
hspace="": horizontal space to be left blank around the image.
Differently from HTML, WML is a case-sensitive language. Hence, “FILE.wml” and “File.wml” are interpreted as different file names. On the contrary, there are no differences between WML and HTML with respect to URL and comment (<!-- … -->) syntax.
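As an illustrative sketch combining the task elements and image attributes above (the card ids, labels and image file name are invented; a complete deck may also carry a DOCTYPE declaration referencing the WML DTD):

```xml
<?xml version="1.0"?>
<wml>
  <card id="home">
    <p>
      <img src="logo.wbmp" alt="Logo" align="middle"/>
      Welcome!
    </p>
    <do type="accept" label="Details">
      <go href="#details"/>
    </do>
  </card>
  <card id="details">
    <p>Details of the service.</p>
    <do type="prev" label="Back">
      <prev/>
    </do>
  </card>
</wml>
```

Selecting the first do action navigates to the details card within the same deck; the prev task returns to the previously displayed card.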

7. Links to additional materials:
[CDK05] G. Coulouris, J. Dollimore, T. Kindberg. Distributed Systems: Concepts and Design, 4th ed. Addison-Wesley, 2005.
[CG06] H.-H. Chen, M. Guizani. Next Generation Wireless Systems and Networks. John Wiley & Sons, 2006.
[TEK01] S. Tekinay. Next Generation Wireless Networks. Springer, 2001.
[IETF1321] Internet Engineering Task Force. RFC 1321 Mobile IP.
[IETF2344] Internet Engineering Task Force. RFC 2344 Mobile IP and security.
[AN01] C. Andersson. GPRS and 3G Wireless Applications. John Wiley & Sons, 2001.

8. Test questions: Question 1: Which of the following storage media use optical technology? A. CD-ROM and DVD True: for both of them, reading takes place by means of a laser beam reflected on a surface.

B. CD-ROM and Floppy disk False: the floppy disk is based on magnetic effects.

C. CD-ROM, flash memories and DVD False: in flash memories the storing of digital information is based on electrical phenomena.

D. Floppy disk and DVD

False: Based on magnetic effects

Question 2: A codec…: A. …compresses/decompresses multimedia content True: in particular, compression takes place before storing data, while decompression takes place before playing the content.

B. …defines how information is stored on memory supports False: this is defined by “formats”.

C. …is a physical support for storing multimedia data. False: codecs are not physical supports.

D. …is typically implemented in hardware. False: although it can be implemented in hardware, it is typically a software program.

Question 3: IEEE 802.11 is a popular standard suite for wireless networks. This standard: A. Defines the Application layer for wireless networks of the ISO-OSI protocol stack. Wrong. IEEE 802.11 is a standards suite that defines the physical and the MAC layers of the ISO-OSI protocol stack for building wireless LANs.

B. Defines the physical and the MAC layers of the ISO-OSI protocol stack for building wireless LANs. Right.

C. Encompasses a wireless network architecture which can be of three different categories: IBSS, BSS, DSSS. Wrong. IEEE 802.11 is a standards suite that defines the physical and the MAC layers of the ISO-OSI protocol stack for building wireless LANs.

D. Does not encompass the presence of multiple access points in a small area. Wrong. IEEE 802.11 is a standards suite that defines the physical and the MAC layers of the ISO-OSI protocol stack for building wireless LANs.

Question 4: What is an access point? A. A wireless user device. Wrong. An access point is a device that allows connectivity among user devices and between them and a wired infrastructure.

B. A device that allows connectivity among wireless user devices. Wrong. An access point allows connectivity among user devices and between them and a wired infrastructure.

C. A device that allows connectivity among user devices and between them and a wired infrastructure. Right. An access point allows connectivity among user devices and between them and a wired infrastructure.

D. A device that allows connectivity between a user device and the wired network. Wrong. An access point allows connectivity among user devices and between them and a wired infrastructure.

Question 5: In a satellite network: A. The downlink frequency is lower than the uplink frequency. Right: This is because higher frequencies are subject to greater dispersions. Hence, in order to compensate for the performance loss, it is necessary to augment the transmission power. Terrestrial stations can use higher power levels than transmitters on satellites.

B. The downlink frequency is higher than the uplink frequency. Wrong. The downlink frequency is lower than the uplink frequency. This is because higher frequencies are subject to greater dispersions. Hence, in order to compensate for the performance loss, it is necessary to augment the transmission power. Terrestrial stations can use higher power levels than transmitters on satellites.

C. The uplink and downlink frequencies are equal. Wrong. The downlink frequency is lower than the uplink frequency. This is because higher frequencies are subject to greater dispersions. Hence, in order to compensate for the performance loss, it is necessary to augment the transmission power. Terrestrial stations can use higher power levels than transmitters on satellites.

D. A terrestrial station transmits data directly to receiver stations that are in its coverage area. Wrong. A terrestrial station transmits data through a high-frequency uplink channel to a satellite in orbit, that acts as a repeater, re-transmitting data back to Earth to receiver stations that are in its coverage area, through a lower frequency downlink channel.

Question 6: A geostationary satellite…: A. … rotates in a circular orbit around the Earth at about 36,000 km, with a revolution period of 24 hours. Right. With such a revolution period it remains stationary above a point on the Earth.

B. … rotates in a circular or elliptic orbit around the Earth at a distance less than 2,000 km, with a revolution period from 1.5 to 2 hours. Wrong. These are the characteristics of a Low Earth Orbit satellite. A geostationary satellite rotates in a circular orbit around the Earth at about 36,000 km, with a revolution period of 24 hours. In this way it remains stationary above a point on the Earth.

C. … rotates in a circular orbit around the Earth at a distance between 5,000 and 12,000 km, with a revolution period of about 6 hours. Wrong. These are the characteristics of a Medium Earth Orbit satellite. A geostationary satellite rotates in a circular orbit around the Earth at about 36,000 km, with a revolution period of 24 hours. In this way it remains stationary above a point on the Earth.

D. … rotates in a circular orbit around the Earth at a distance of 36,000 km, with a revolution period of about 6 hours. Wrong. A geostationary satellite rotates in a circular orbit around the Earth at about 36,000 km, but with a revolution period of 24 hours. In this way it remains stationary above a point on the Earth.

Question 7: What is a Bluetooth piconet?

A. A network composed of two or more Bluetooth enabled devices. Right.

B. It is a connection between two Bluetooth devices. Wrong.

C. It is a generic connection with a Bluetooth headphone. Wrong.

D. It is a special device which connects a Bluetooth network with a LAN. Wrong.

Question 8: What is a foreign (or external) domain in Mobile IP?

A. It is a network the terminal cannot connect to. Wrong.

B. It represents any external network the terminal can connect to, different from its home domain. Right.

C. It is a network which is characterized by protocols different from those of the home network. Wrong.

D. It is a group of terminals, external to the home network. Wrong.

Question 9: What is the Mobile IP registration lease? A. It is the maximum period of time a terminal can be connected to an external network. If this period is not renewed periodically, the terminal will be automatically de-registered from the home domain. Wrong.

B. It is the period of validity of the registration of a terminal at an external domain. When it expires, the terminal is de-registered automatically, and resources allocated to it are released. Right.

C. It is the maximum period in which the terminal can remain connected to the home domain. When this period expires, the terminal is automatically de-registered from the home domain. Wrong.

D. It is the maximum period during which the packet routing from the home to the foreign domain takes place successfully. After that period, the protocol does not guarantee a proper routing of packets. Wrong.

Question 10: What is WAP? A. It is a set of protocols allowing interoperability between the Internet and wireless networks. Right.

B. It is a protocol that allows publishing a web site on a mobile device. Wrong. This is not the WAP goal.

C. It is an architecture for making phone calls through the Internet. Wrong. This is not the WAP goal.

D. It is an alternative to the HTTP protocol. Wrong. WAP is a set of protocols allowing interoperability between the Internet and wireless networks.