Computer Methods and Programs in Biomedicine, 37 (1992) 343-351


© 1992 Elsevier Science Publishers B.V. All rights reserved 0169-2607/92/$05.00 COMMET 01290

JPEG compression for PACS

Ken'ichiro Kajiwara

Department of Medical Informatics, Kurume University Hospital, Kurume, Japan

In the medical field, especially in diagnostic radiology, there still remains controversy over how, and whether or not, to compress X-ray images for storage and transmission. The Joint Photographic Experts Group (JPEG) standard, which has recently been agreed upon, is a very attractive technique for archiving and transporting images in medical fields. This technique is based on 'lossy' compression of images, can handle not only X-ray images but also full-color images, and is suitable for introduction into picture archiving and communication systems (PACS). After compression, the images can be handled as quite small clusters of data. For example, a single 2000 × 2000 × 12 bit chest X-ray image, an 8 Mbyte image, compressed at a 10:1 ratio could retain virtually all the visible quality of the original version, and would take 100 s to transmit at 64 kbits/s over the Integrated Services Digital Network (ISDN; an 800 kbyte compressed file at 8 kbytes/s, theoretically). An important factor in the design of this technique is that the format relies on no specific hardware or software when JFIF (JPEG File Interchange Format) is used. Soon this algorithm will be able to run on any workstation or on any PC in the world. Highly compressed images may be unsuitable for diagnostic purposes; however, they may be sufficient as reference images, which are needed in clinical fields.

Keywords: Data compression; JPEG; JFIF; PACS

1. Introduction

1.1. The history of JPEG

The JPEG is the joint committee organized in 1986 by the Comité Consultatif International Télégraphique et Téléphonique (CCITT) and the International Standards Organization (ISO). The mandate and purpose of the JPEG is to develop and to standardize an efficient method for the digital compression of fully colored photographic images. In a competition among 12 algorithms, three finalists were selected in June, 1987. Of these three, the Adaptive Discrete Cosine Transform

Correspondence: Ken'ichiro Kajiwara, Department of Medical Informatics, Kurume University Hospital, 67 Asahi-machi, 830 Kurume, Japan.

(ADCT) was unanimously selected in early 1988 as being capable of best reproducing the picture quality. Since then, a cooperative effort to refine, test and document the DCT-based method has been in progress. The final draft (JPEG-9-R-6) was agreed upon in May, 1991 and this will become the international standard in the near future as ISO CD-10918-1 [1,2].

1.2. Developing the JPEG algorithm

The proposed standard method for digital compression is designed to be flexible and suitable for a broad range of applications. The selected algorithm is a 'lossy' technique based on the Discrete Cosine Transform (DCT) [3], followed by a uniform quantizer and entropy encoding. Work started within ISO in the early 1980s; one initial focus was on the use of photographic images within videotex systems. It was

expected that such systems would eventually employ ISDN lines (64 kbits/s) for transmission. The initial algorithm requirements and the evaluation procedures reflect this early focus. Color images with a resolution of 720 × 576 pixels (CCIR 601 format) were selected as test material (Fig. 1). The compression goal involved reproducing good image quality at around 1 bit/pixel, with progressive build-up allowing early recognition of the image at a lower quality. The image quality evaluation was tied to relatively inexpensive monitors.
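As a rough check of these design figures, the implied compression ratio and ISDN transmission time can be computed directly. This is a back-of-envelope sketch; the 8-bit, 3-component raw format is an assumption for illustration, not taken from the JPEG documents.

```python
# Implied numbers for the early JPEG design goal (illustrative only).
width, height = 720, 576            # CCIR 601 test image resolution
raw_bits = width * height * 8 * 3   # assuming 8 bits per pixel per RGB component
target_bits = width * height * 1    # goal: about 1 bit/pixel after compression

ratio = raw_bits / target_bits           # implied compression ratio
seconds_on_isdn = target_bits / 64_000   # one 64 kbit/s ISDN channel

print(target_bits // 8)            # compressed size in bytes: 51840
print(ratio)                       # 24.0 -> a 24:1 ratio
print(round(seconds_on_isdn, 1))   # about 6.5 s to transmit
```

The 24:1 figure agrees with the ratio quoted for CCIR 601-type images in Section 2.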

1.3. JPEG configuration

The broad scope and the variety of target products resulted in a three-part JPEG algorithm definition: the baseline system, the extended system, and the special function for lossless encoding. The baseline system is mandatory and every

codec must incorporate this function. The extended system adds features such as sophisticated coding, lossless transmission, and progressive build-up. A special function has since been added to achieve sequential reversible coding. Technical agreement on these specifications was reached in October, 1989, and a draft specification was made publicly available in January, 1990.

2. DCT-based compression

JPEG is quite 'lossy', meaning that the image you get out of decompression is not quite identical to what you put in. The algorithm achieves much of its compression by exploiting known limitations of the human eye; notably, the fact that small color details cannot be perceived as well as small details of light-and-dark. Thus,

Fig. 1. Original photo image of CCIR 601 evaluation (720 × 576 × 8 bit image).

JPEG is intended for compressing images that will be looked at by the human eye. If machine analysis is intended, the small errors introduced by JPEG may create a great problem.

The compression ratios achieved by lossy methods are much higher than with lossless techniques. Image compression factors depend on the image material of immediate interest and, for the lossy case, can be selected at the time of compression by the user in a continuous manner. For CCIR 601-type images, the JPEG algorithm achieves compression ratios of 24:1, which yields virtually perfect picture quality. Compression down to around 0.25 bit/pixel can be achieved while maintaining acceptable image quality. The compression, however, is achieved at the cost of computational complexity; therefore, custom hardware is needed for real-time performance. A large development effort is currently under way involving DSP implementations and custom VLSI chips.

To satisfy the functional demands, the baseline method must be able to provide high-quality reproduced images for all possible color spaces, maintain high compression ratios and support input pixel element specifications of between 4 and 16 bits. In addition, implementations of this standard must be cost-effective in software, in DSP implementations and in custom VLSI chips.

The JPEG Adaptive DCT (ADCT) algorithm can be divided into three stages: (a) the removal of data redundancy by means of the DCT; (b) the quantization of the DCT coefficients, introducing weighting functions optimized for the human visual system; and (c) an encoder minimizing the entropy of the quantized DCT coefficients, consisting of a coding model followed by entropy encoding. The entropy encoding is done with a Huffman variable-word-length encoder.

In the first step, the DCT makes an approximation of the image by fitting basis patterns to a small, square area of the image. The sum of the basis functions results in an image which is visually equivalent to the original. Fortunately for lossy compression, most of the contributions of the high-frequency basis functions can be discarded, since such basis functions have only a small effect on the quality of the final image.
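The transform-and-quantize stages described above can be sketched as follows. This is a minimal illustration, not the JPEG reference implementation: the flat quantization step (q = 16) is made up, whereas the real standard uses visually weighted tables, and a featureless 8 × 8 block is used so the expected result is easy to see.

```python
# Stages (a) and (b) above: an 8 x 8 2-D DCT, then uniform quantization.
import math

N = 8

def dct_2d(block):
    """Forward 2-D DCT-II of an N x N block given as a list of lists."""
    def c(k):                                  # orthonormal scale factor
        return math.sqrt(1 / N) if k == 0 else math.sqrt(2 / N)
    out = [[0.0] * N for _ in range(N)]
    for u in range(N):
        for v in range(N):
            s = 0.0
            for x in range(N):
                for y in range(N):
                    s += (block[x][y]
                          * math.cos((2 * x + 1) * u * math.pi / (2 * N))
                          * math.cos((2 * y + 1) * v * math.pi / (2 * N)))
            out[u][v] = c(u) * c(v) * s
    return out

def quantize(coeffs, q=16):
    """Uniform quantizer: small high-frequency terms round to zero."""
    return [[round(value / q) for value in row] for row in coeffs]

flat = [[128] * N for _ in range(N)]   # a featureless mid-gray block
coeffs = quantize(dct_2d(flat))

# All the energy of a flat block sits in the DC term; every AC term is 0.
dc = coeffs[0][0]
ac_energy = sum(abs(coeffs[u][v]) for u in range(N) for v in range(N)) - abs(dc)
print(dc, ac_energy)   # 64 0
```

After quantization, only the non-zero coefficients need to be entropy-coded; for this block that is a single value, which is where the data reduction comes from.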

Discarding these functions rather than transmitting them then accounts for a major part of the data reduction. The low-frequency basis functions are preserved, and the redundancy is eliminated from the image.

Although color conversion is part of the redundancy removal process, it is not part of the JPEG algorithm. JPEG's goal is to be independent of the color space. JPEG handles colors as separate components, so it can be used to compress data from different color spaces such as RGB, YUV, CIELUV and CMYK. However, the best compression results are achieved when the color components are independent (non-correlated), as in YUV, in which most of the information is concentrated in the luminance and less in the chrominance. The RGB components can be converted by a linear transformation into YUV components to take advantage of this. Another advantage of using the YUV color space comes from reducing the spatial resolution of the U and V chrominance components. Chrominance does not need to be specified as frequently as luminance, so every other U element and every other V element is discarded after transformation of the RGB format into the YUV format. As a consequence, a data reduction of 3:2 is obtained from transforming RGB into YUV. The conversion in color space is the first step towards compressing the image. The next step is then to treat each color component independently, either by the baseline model or by the extended system.

2.1. Baseline model

The baseline system provides efficient lossy sequential image compression. It supports 4 color components simultaneously, with a maximum of 8 input bits for each color pixel component. The 4-component system means the encoder can receive up to 4 quantization tables and 2 Huffman tables for both the DC and AC DCT coefficients. There is an option to use predefined (default) Huffman tables. Therefore, the baseline encoder can operate as a 1-pass or a 2-pass system. In the 1-pass mode, default predetermined Huffman tables are used, whereas in the 2-pass system Huffman tables are created that are specific to the image to be encoded.

The basic data entity is a block of 8 × 8 pixels. However, this block can represent a larger subsampled image area, for example through decimated chrominance signals. The blocks of the different color components are sent interleaved, thereby allowing the decoder to reproduce the decompressed image easily. An encoder may optionally insert various types of 'marker codes' into the data stream. These codes, however, must fall on block boundaries. Marker codes can be used to signal private information to the decoder, or for error correction. Because they are embedded in the data stream, they must have a special reserved code, and every decoder must be able to detect them.

2.2. Extended system

The extended system is specified to meet the needs of applications requiring more efficient coding, progressive build-up, or lossless coding. These additional features are not required for all applications; consequently, to minimize the cost of JPEG baseline compatibility, they are defined only in the extended system. The extended system includes all of the baseline system. Besides the above-mentioned features, it can also handle pixels of greater than 8 bits precision, additional data-interleave schemes, and additional color components.
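The color-space step described earlier (a linear RGB-to-YUV conversion followed by discarding every other chrominance sample) can be sketched as follows. The conversion weights are the usual CCIR 601 luma/chroma weights, and the pixel values are made up for illustration.

```python
# Sketch of the color-space step: RGB -> YUV, then 2:1 chroma subsampling.
def rgb_to_yuv(r, g, b):
    y = 0.299 * r + 0.587 * g + 0.114 * b   # CCIR 601 luminance weights
    u = (b - y) * 0.564                     # blue-difference chrominance
    v = (r - y) * 0.713                     # red-difference chrominance
    return y, u, v

# Four made-up pixels; neighbouring pixels have similar chrominance.
pixels = [(200, 30, 40), (198, 32, 41), (60, 120, 200), (58, 118, 202)]

ys, us, vs = zip(*(rgb_to_yuv(*p) for p in pixels))
us_sub, vs_sub = us[::2], vs[::2]   # discard every other U and V sample

samples_before = 3 * len(pixels)                     # RGB: 3 samples/pixel
samples_after = len(ys) + len(us_sub) + len(vs_sub)  # Y + subsampled U, V
print(samples_before, samples_after)   # 12 8 -> the 3:2 reduction
```

Twelve samples shrink to eight, which is exactly the 3:2 data reduction quoted above, before any DCT coding has taken place.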

3. What is JFIF?

JFIF is a minimal file format which enables JPEG bitstreams to be exchanged between a wide variety of platforms and applications. This minimal format does not include any of the advanced features found in the JPEG extension proposal, nor any application-specific file format. Nor should it, for the only purpose of this simplified format is to allow the exchange of JPEG compressed images. Although any JPEG procedure is supported by the syntax of JFIF, it is strongly recommended that the JPEG baseline procedure be used for the purposes of file interchange. This ensures maximum compatibility with all applications supporting JPEG.

JFIF is entirely compatible with the standard JPEG interchange format; the only additional requirement is the mandatory presence of the JFIF special marker, called the APP0 marker, positioned immediately following the JPEG opening marker, called SOI. Note that the standard JPEG interchange format requires (as does JFIF) that all table specifications used in the encoding process be coded in the bitstream prior to their use. This format is compatible across platforms: for example, it does not use any resource forks, which are supported by the Macintosh but not by other PCs or workstations. The color space to be used is YCbCr as defined by CCIR 601 (256 levels). The RGB components calculated by linear conversion from YCbCr shall not be gamma corrected. The APP0 marker provides information which is missing from the JPEG stream: namely, the version number, the X and Y pixel density (dots per inch or dots per cm), the pixel aspect ratio (derived from the X and Y pixel density), and a thumbnail image. Additional APP0 marker segments can be used to hold application-specific information which does not affect the decodability or displayability of the file.
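The byte layout just described (SOI immediately followed by the JFIF APP0 segment) can be illustrated with a hand-built header. The field values below (version 1.02, 72 dpi, no thumbnail) are arbitrary examples, not taken from any particular file.

```python
# Hand-built JFIF header: SOI, then the mandatory APP0 marker segment.
import struct

header = (bytes([0xFF, 0xD8])            # SOI: start of image
          + bytes([0xFF, 0xE0])          # APP0 marker
          + struct.pack(">H", 16)        # segment length (includes itself)
          + b"JFIF\x00"                  # JFIF identifier string
          + bytes([1, 2])                # version 1.02
          + bytes([1])                   # density units: dots per inch
          + struct.pack(">HH", 72, 72)   # X and Y pixel density
          + bytes([0, 0]))               # no thumbnail (0 x 0)

# A decoder checks the two markers in order before reading the fields.
assert header[0:2] == b"\xFF\xD8"        # SOI must come first
assert header[2:4] == b"\xFF\xE0"        # APP0 immediately follows
length = struct.unpack(">H", header[4:6])[0]
identifier = header[6:11]
version = (header[11], header[12])
xdensity, ydensity = struct.unpack(">HH", header[14:18])
print(identifier, version, xdensity, ydensity)
```

Because the APP0 segment carries its own length field, decoders that do not understand JFIF can simply skip it, which is what keeps JFIF compatible with the plain JPEG interchange format.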

4. JPEG now

At the moment there are three implementations of JPEG:
• completely software-driven JPEG, which is slow to execute;
• software assisted by a Digital Signal Processor (DSP) to accelerate execution;
• completely hardware-implemented JPEG, which achieves ultra-high speed.

Benchmark tests of these different implementations are shown in Table 1. According to the results, a decoding/encoding time within a second may be desirable for practical use. An efficient DSP implementation is therefore acceptable, but the purely software-driven version is not in most medical fields. No test was performed of the chip-implemented versions because there are no JFIF-oriented chips available at present.

There are now two streams in adapting JPEG: (1) to use JPEG as a multi-platform communications medium, and (2) to use JPEG as specifically installed hardware. The former, the JFIF-oriented use of JPEG, is acceptable for general non-technical purposes, while the latter is acceptable for technical purposes such as press or desktop publishing.

TABLE 1.1
Benchmark test results of JPEG implemented with the same software on different computers and OSs

Machine        CPU              OS             CJPEG (s)   DJPEG (s)
SUN4           Sparc/40M        SunOS 4.1.1        6.3         9.0
NeXT           68040/25M        NeXT OS 2.1J      10.3        13.4
IBM PC         486/33M          RUN386            11.0        16.8
IBM PC         486/25M          RUN386            15.1        21.3
IBM PC         486/33M          MS-DOS 5.0        23.6        30.3
IBM PC         486/25M          MS-DOS 5.0        31.8        40.4
FM Towns       386/16M 0wait    RUN386            51.4        73.8
FM Towns       386/16M 3wait    RUN386            66.5        96.8
FM Towns       386/16M 0wait    MS-DOS 3.1       108.0       136.0
FM Towns       386/16M 3wait    MS-DOS 3.3       141.8       176.6
FMR-50LT       286/8M           MS-DOS 3.1       209.4       258.0
SoftPC 2.0     (NeXT 40/25M)    MS-DOS 3.3       208.0       247.2

Software used: CJPEG.EXE (encoder) and DJPEG.EXE (decoder) (sources offered by the Independent JPEG Group). FM Towns and FMR-50LT are trademarks of Fujitsu Corp.

TABLE 1.2
Benchmark test results of JPEG implemented with hardware and software

                          DATA A                     DATA B
                          with DSP   without DSP     with DSP   without DSP
Decompression time (s)      1.58       59.26           4.71       91.66
Display time (s)            7.41       65.91           8.06       95.30

Machine used: Macintosh IIci, with or without the Picture Press Accelerator Card. Software used: Picture Press version 2.0 (Storm Technology). DATA A: 1024 × 1024 × 8, compressed into 152 kbytes. DATA B: 1550 × 800 × 8, compressed into 238 kbytes. The data were acquired from an HDTV (High Definition TV) camera.

TABLE 2
System configuration

System A (the main system for HIS):
SUN 4/470 (Sun Microsystems Inc.) × 1 as file server, equipped with 4.2 Gbytes of hard disk storage.
SUN Sparc/2 station × 4 as graphic terminals, each equipped with an 8-bit frame memory card and a 640 Mbyte hard disk drive.
These five machines are networked with Ethernet and situated in the computer center.

System B (the experimental mini-PACS using JPEG):
SUN Sparc/2 station × 1, equipped with a JPEG compression board (Picture Press X, Storm Technology), a 24-bit frame memory card, 32 Mbytes of main memory and 640 Mbytes of storage.
SUN 4/330 × 1, to be equipped with a DCT compression board offered by Prof. Huang, UCLA, in Feb. '92; 32 Mbytes of main memory and 640 Mbytes of hard disk storage.
Macintosh IIci (Apple Computer Inc.) × 1, with 16 Mbytes of main memory, a 32-bit video card, a JPEG compression board (Picture Press, Storm Technology), a 180 Mbyte hard disk and a 512 Mbyte MO disk.
MS-DOS machine PC9801 RA (NEC Inc.) × 1, with 640 kbytes of main memory plus 10 Mbytes of EMS memory, a 24-bit video card, a JPEG compression board (Janus VI, Canopus Corp.), a 120 Mbyte hard disk and a 640 Mbyte MO disk.
These four machines are networked with Ethernet (10 Mbits/s). Two stand-alone MS-DOS-compatible computers can also display JPEG images in 15-bit and 16-bit modes. System B is installed in the Dept. of Medical Informatics, apart from System A.

5. System configuration in Kurume University Hospital

The experimental Hospital Information System (HIS) under construction now in our faculty

TABLE 3
Software configuration

Image Alchemy version 1.5 for Unix (Handmade Software Inc.)
Picture Press X for Unix (Storm Technology)
Picture Press 2.0.1 for Macintosh (Storm Technology)
Photo Album 1.0 for Macintosh (Storm Technology)
ImpressIt for Macintosh (Radius Corp.)
Image Alchemy ver. 1.5 for MS-DOS (Handmade Software Inc.)
DJPEG/CJPEG for MS-DOS (Independent JPEG Group)
JANUS series (Canopus Inc.)
Several original tools written in C
Several tools available as freeware

is shown in Table 2. Communication between the two systems is performed using 3 phone-modem lines. The communication speed is 9600 baud. The phone-modem lines are also scheduled to be expanded to 25 in order to communicate with any computers in and out of the hospital accessing the HIS. The software used to perform the JPEG algorithm is all JFIF-oriented (Table 3).


When images are required to be displayed in 24-bit format (full-color images) or 8-bit format (256 gray-scale images), a frame memory board is needed. However, such frame memory boards are not yet in wide use. Typical hardware can display only 4 bits/pixel, or 16 colors (that is, 16 gray scales). To display a full-color image on such a limited display, the computer

Fig. 2. Original film digitized into 10-bit format, showing bit stripping of the images from upper left to lower right, from 8 bits down to 3 bits. There is virtually no visible difference down to 6 bits, just slight darkening in the lower row of images. A clear difference can be detected between 3 and 4 bits.

must map the image into an appropriate set of representative colors. This process is called 'color quantization' (not to be confused with the coefficient quantization done internally by JPEG). Color quantization is obviously a lossy process. It turns out that for most images, the details of the color quantization algorithm have much more impact on the final image quality than do any errors introduced by JPEG (except at the lowest JPEG quality settings). These images can be displayed on any monitor. The differences between different bit depths are shown in Fig. 2.
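For the gray-scale case described above, a minimal sketch of such quantization is mapping 8-bit (256-level) values onto the 16 levels a 4-bit display can show. Real quantizers choose smarter palettes and often add dithering; the uniform bucketing below is only illustrative.

```python
# Uniform quantization of 8-bit gray to the 16 levels of a 4-bit display.
def quantize_to_4bit(gray8):
    """Map a 0-255 gray value onto one of 16 representative levels."""
    bucket = gray8 >> 4        # keep the top 4 bits: bucket 0..15
    return bucket * 17         # spread the buckets back over 0..255

samples = [0, 40, 128, 200, 255]
print([quantize_to_4bit(g) for g in samples])   # [0, 34, 136, 204, 255]
```

Neighbouring input values collapse onto the same level, which is exactly the visible banding this loss introduces on 4-bit hardware.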

6. Data input

This system has been implemented to archive all kinds of images occurring in medical fields, storing them for reference purposes rather than for diagnostic purposes. Some difficulty still exists in digitizing images at adequate quality at this time; for endoscopic and ultrasonographic images in particular, no direct digitizer is yet available, even where CCD devices are used. The following examples illustrate how medical images are digitized.

(i) An FV-540 (Canon Corp.) is used to digitize data from ultrasound and endoscopic images. This digitizer is connected to an ultrasound machine (RT-8000, Yokogawa Medical Systems) and a Macintosh to digitize images obtained through a Y/C separate video line. It takes about one second to digitize a single image (320 × 240 × 24 bits, 240 kbytes/image). All the digitized images are stored in the computer memory until the end of the examination, and are then transferred from the main memory onto the storage media (now a 120 Mbyte hard disk or a 512 Mbyte magneto-optical disk). After completion of one day's work, the compression sequence is performed; this procedure is programmed to run automatically. It takes about 1 s to compress a single image into 10 kbytes, so about 20 s is needed for a single examination, and thus 5 min for one day's work, consuming 3 Mbytes of storage area (20 images per person, and 15 persons per day on average). The procedure is the same for endoscopic images but is performed in another place: the images must be collected on an analog video floppy disk (a 2-inch analog floppy disc can hold up to 50 images, and this medium can be handled by the FV-540 connected with the ultrasound machine).

(ii) An analog film digitizer (KFDR-S, Konica Corp.) is used to digitize analog X-ray films. A 2000 × 2000 × 10 bit image (8 Mbytes) is digitized within 30 s, stored temporarily (and printed out if necessary using a laser printer), then recalculated into 400 × 400 × 8 bit format (160 kbytes) and compressed into 10 kbytes. This sequence takes about 1 min.

(iii) Other materials in photocopy mode can be digitized using a CCD image scanner (GT3000/4000/6000, Epson Corp.) or a laser scanner (Pixel DiO, Canon Corp.). Using these modalities, originals up to A3 size can be digitized. Other material, such as 35 mm slides, can be digitized by a CCD film digitizer (LS3500, Nikon Inc.), which has the capacity to digitize up to 6000 × 4000 pixels at 8 bit depth per plane. But since it requires a long time (nearly an hour) to process images of that size, it is difficult to incorporate such procedures into routine work. Processing at less than 1000 × 1000 pixels is acceptable for practical use because the sequence then takes about 20 s. A new type of still camera, the digital still camera, can digitize an actual scene in only 0.08 s, and the image can be transferred directly into the computer memory. This type of camera (HC1000, Fuji Photo Film Inc.) is most useful for live photo capture. Using such modalities, almost all images in medical fields can be captured in digital format.
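The storage bookkeeping for the ultrasound workflow in (i) can be reproduced directly from the figures quoted above:

```python
# Daily storage and compression-time bookkeeping, using the paper's figures.
images_per_exam = 20       # images per person
exams_per_day = 15         # persons per day on average
kbytes_per_image = 10      # compressed size of one image
seconds_per_image = 1      # compression time for one image

seconds_per_exam = images_per_exam * seconds_per_image
minutes_per_day = seconds_per_exam * exams_per_day / 60
mbytes_per_day = images_per_exam * exams_per_day * kbytes_per_image / 1000

print(seconds_per_exam, minutes_per_day, mbytes_per_day)   # 20 5.0 3.0
```

The 20 s per examination, 5 min per day and 3 Mbytes per day all follow from the per-image figures, which is why the overnight batch can run unattended.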

7. Data archive and communication

As stated before, JPEG can transform a large amount of data into a tiny cluster. The compression ratio can be varied from 1:1 to 255:1, so the required storage capacity and communication speed depend on this ratio. For example, reducing 2000 × 2000 × 10 bits of data to 8 bits cuts the file size to 1/2, so lossy compression of the data at a ratio of 10:1 would reduce the file size to 1/20. This ratio is considered acceptable by many researchers [4-9,11,12]. For a compression ratio of 50:1 the file size is reduced to 1/100. Another approach is to reduce the matrix size to 400 × 400 × 8, which is adequate for display on most PCs. This, at a compression ratio of 10:1, results in a reduction to 1/500 (16 kbytes) of the original size. Using such a tiny image format (400 × 400 × 8, compressed by JPEG), any image can be stored on any medium: a single floppy disk can contain more than 50 images. Such an image can in turn be transferred via a normal LAN (10 Mbits/s theoretically, 2.4 Mbits/s actually, that is 300 kbytes/s) in only 0.5 s. When sending images via BITNET or INTERNET with a popular 2400-baud modem, the uploading time is estimated to be only 80 s (200 bytes/s).
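These size and transfer-time estimates can be reproduced directly; the LAN and modem throughput figures (300 kbytes/s and 200 bytes/s) are the observed values quoted above.

```python
# Reproducing the file-size and transfer-time estimates.
raw_bytes = 2000 * 2000 * 2     # 10-bit pixels stored in 16-bit words: 8 Mbytes
small_bytes = 400 * 400 * 1     # 400 x 400 x 8 matrix: 160 kbytes
jpeg_bytes = small_bytes // 10  # 10:1 JPEG compression: 16 kbytes

print(raw_bytes // jpeg_bytes)          # 500 -> the 1/500 reduction
print(jpeg_bytes / 200)                 # 2400-baud modem at 200 bytes/s: 80.0 s
print(round(small_bytes / 300_000, 2))  # 160-kbyte image over the LAN: 0.53 s
```

The 80 s modem estimate thus corresponds to the fully compressed 16 kbyte file, while the roughly half-second LAN transfer corresponds to the uncompressed 160 kbyte matrix.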

8. Discussion

PACS is now faced with many serious problems, mainly caused by the size of the data and the communication speed of the Local Area Network (LAN). A single CT/MR image of 512 × 512 × 8 consists of 260 kbytes, a chest X-ray of 1024 × 1024 × 10 of 2 Mbytes, and one of 2048 × 2048 × 12 of 8 Mbytes. If they are not to be compressed irreversibly, then solving this problem requires an ultra-high-speed LAN and gigabyte storage media.

Other than DCT or JPEG, several techniques are applied to compress images. For 'lossless' compression, the most popular method is Lempel-Ziv (LZ) or Lempel-Ziv-Welch (LZW), which can compress images to 1/4 at most. Another new technique to compress images without loss, wavelet compression, was reported recently [13], but this new method may not actually come into use in the next decade.

The history of 'lossy' compression in medical fields is not recent. The first VLSI implementation of the DCT was delivered in 1985, and the first medical application of the DCT was published in that same year [4]. In that paper, six diagnostic images were used and the results were encouraging: compression ratios of up to 16:1 did not cause excessive visual degradation. Several reports followed using the DCT-based technique, namely the full-frame bit-allocation technique [5-8]. MacMahon also concluded that the same ratio was acceptable using chest films for diagnostic use [9]. One report has shown a lower compression ratio of 4:1 for MR images of the brain to be acceptable [10]; these results may be due to the poor resolution of the original MR image itself. All these images had been digitized using film digitizers, but another approach is to digitize X-ray images directly, as in computed radiography [11]. By introducing this technique, the time required for the digitizing process is reduced, and 'lossy' compression is likewise applied for archiving. Ishigaki [12] has reported such results using CR and DCT, in which the acceptable upper limit of the compression ratio for chest X-rays is 20:1. X-ray images at various compression ratios are shown in Fig. 3.

On the other hand, the demand for archiving pictures from other clinical fields is also great. Using the many modalities shown above, almost all kinds of images can be digitized. These images can also be compressed using JPEG without any hazard. The processing time of JPEG is now just a little slow, but the implementation of micro-chips will solve this problem in the near future.

Fig. 3. The same images as in Fig. 1, shown at various compression ratios.

9. Conclusion

PACS is now faced with great problems, and there are hardly any ways to solve them if lossless handling is insisted upon. JPEG is itself a 'lossy' technique, but it can be applied in picture archiving and communication systems in the medical field. Applying this technique in PACS may produce great advantages, especially in non-diagnostic image handling.

References

[1] G.K. Wallace, The JPEG still picture compression standard, Commun. ACM 34 (1991) 30-44.
[2] ISO/IEC Committee Draft CD 10918-1, dated 1991-10-15.
[3] N. Ahmed, T. Natarajan and K.R. Rao, Discrete cosine transform, IEEE Trans. Comput. C-23 (1974) 90-93.
[4] S.C. Lo and H.K. Huang, Radiological image compression: full-frame bit-allocation technique, Radiology 155 (1985) 811-817.
[5] S.C. Lo and H.K. Huang, Compression of radiological images with 512, 1024 and 2048 matrices, Radiology 161 (1986) 519-525.
[6] H.K. Huang, S.C. Lo, B.K. Ho and S.L. Lou, Radiological image compression using error-free irreversible two-dimensional direct-cosine-transform coding techniques, J. Opt. Soc. Am. 4 (1987) 984-992.
[7] H.K. Huang, Progress in image processing technology related to radiological sciences: a five-year review, Comput. Methods Progr. Biomed. 25 (1987) 143-156.
[8] B.K. Ho, B.S.J. Chao, M.S.P. Zhu and H.K. Huang, Design and implementation of full-frame, bit-allocation image-compression hardware module, work in progress, Radiology 179 (1991) 563-567.
[9] H. MacMahon, K. Doi, S. Sanada, S.M. Montner, M.L. Giger et al., Data compression: effect on diagnostic accuracy in digital chest radiography, Radiology 178 (1991) 175-179.
[10] F.A. Howe, Implementation and evaluation of data compression of MR images, Magn. Reson. Imaging 7 (1989) 127-132.
[11] K. Asanuma, Technical trends of the CR system, Chapter 6 in: Computed Radiography (Springer-Verlag, 1987).
[12] T. Ishigaki, S. Sakuma, M. Ikeda, Y. Itoh, M. Suzuki and S. Iwai, Clinical evaluation of irreversible image compression: analysis of chest imaging with computed radiography, Radiology 175 (1990) 739-743.
[13] S.G. Mallat, A theory for multiresolution signal decomposition: the wavelet representation, IEEE Trans. Pattern Anal. Mach. Intell. 11 (1989) 674-693.
