Advance Program:
ECRYPT Summer School on Multimedia Security

Salzburg, Austria, September 21-24, 2005

21/09/05
  14.00-15.30  Cryptographic Foundations (Christian Cachin)
  16.00-17.30  Multimedia Technology (Jana Dittmann)

22/09/05
  9.00-10.30   Robust Watermarking (Ingemar Cox)
  11.00-12.30  Authentication Watermarking (Deepa Kundur)
  14.00-15.30  Media Encryption (Andreas Uhl)
  16.00-17.30  Digital Rights Management (Ton Kalker)

23/09/05
  9.00-10.30   Biometrics and Multimedia (Jean-Luc Dugelay)
  11.00-12.30  Steganography (Stefan Katzenbeisser)
  14.00-15.30  Perceptual Hashes (Martin Schmucker)
  16.00-17.30  Attacks against Watermarking Schemes (Deepa Kundur)

24/09/05
  9.00-10.30   Fingerprinting (Nasir Memon)
  11.00-12.30  Benchmarking, Implementation (Andreas Lang)
  14.00-15.30  Advanced Watermarking Protocols (Andre Adelsbach)
  16.00-17.30  Steganalysis (Nasir Memon)

Daily breaks: 10.30-11.00 and 15.30-16.00; lunch break 12.30-14.00.

Courses are classified as Basic, Advanced, or Specialized.


Speakers and Presentations

Andre Adelsbach

Andre Adelsbach studied computer science at Saarland University (Germany), where he focused on security and cryptography. In 2000 he graduated and joined the chair for "Cryptography and Security" of Prof. Dr. Birgit Pfitzmann at Saarland University as a PhD student. During his employment at Saarland University, Mr. Adelsbach worked on the IST FP5 project MAFTIA, contributing to the verification and assessment of cryptographic protocols, and was a member of its executive board from June 2002. In 2004, Mr. Adelsbach joined the Horst Görtz Institute for IT Security, where he is currently finishing his PhD on cryptographic copyright protection.

Andre Adelsbach has authored several international publications and is a reviewer for various conferences and journals. Besides cryptographic copyright protection and digital rights management, his current research interests include security aspects of Voice over IP, the security of satellite data services, and broadcast encryption. For his work on authorship proof systems, Andre Adelsbach received IBM's Best Paper Award "E-Markets: New types of electronic commerce".

About the presentation of Andre Adelsbach - 'Advanced Watermarking Protocols':

In this tutorial we will discuss zero-knowledge watermark detection and its integration into higher-level protocols as a secure replacement for symmetric watermarking schemes. We will start with a general definition of zero-knowledge watermark detection. Then we will review existing proposals for zero-knowledge watermark detection and evaluate their efficiency and security.

Afterwards we will review non-invertibility constructions for standard symmetric watermarking schemes and discuss how to apply these constructions to zero-knowledge watermark detection protocols. Finally, we will consider protocols for the joint generation of concealed watermarks, such that the watermark coefficients are guaranteed to follow a certain desired distribution, and we will see applications of these protocols.

Christian Cachin

Christian Cachin graduated in Computer Science from ETH Zürich (1993). From 1992-1993 he worked at ABB Corporate Research on theory and applications of artificial neural networks. From 1993-1997 he was at ETH Zürich performing research in cryptography and information theory and obtained his Ph.D. in Computer Science from ETH Zürich in 1997. From 1997 to 1998 he was postdoctoral researcher at the MIT Laboratory for Computer Science, with Prof. Ron Rivest, one of the inventors of public-key cryptography. He has been a Research Staff Member at IBM Zurich Research Lab since 1998, where he was involved in a number of projects in security and distributed systems.

He has authored more than thirty publications in computer science, holds several patents on secure protocols and cryptographic algorithms, and has been a member of several program committees of technical conferences. He is a Director of the International Association for Cryptologic Research (IACR). Together with Jan Camenisch he was program chair and organized Eurocrypt 2004. His current research interests are cryptography, network security, fault tolerance and distributed systems.

Ingemar Cox

Ingemar Cox is currently Professor and Chair of Telecommunications in the Departments of Electronic Engineering and Computer Science at University College London, as well as Director of UCL's Adastral Park Postgraduate Campus. He is currently a holder of a Royal Society Wolfson Fellowship. He received his B.Sc. from University College London and Ph.D. from Oxford University. He was a member of the Technical Staff at AT&T Bell Labs at Murray Hill from 1984 until 1989, where his research interests were focused on mobile robots. In 1989 he joined NEC Research Institute in Princeton, NJ as a senior research scientist in the computer science division. At NEC, his research shifted to problems in computer vision and he was responsible for creating the computer vision group at NECI. He has worked on problems of stereo and motion correspondence and on multimedia issues of image database retrieval and watermarking. In 1999, he was awarded the IEEE Signal Processing Society Best Paper Award (Image and Multidimensional Signal Processing Area) for a paper he co-authored on watermarking. From 1997-1999, he served as Chief Technical Officer of Signafy, Inc., a subsidiary of NEC responsible for the commercialization of watermarking. Between 1996 and 1999, he led the design of NEC's watermarking proposal for DVD video disks and later collaborated with IBM in developing the technology behind the joint "Galaxy" proposal supported by Hitachi, IBM, NEC, Pioneer and Sony. In 1999, he returned to NEC Research Institute as a Research Fellow.

He is a senior member of the IEEE, a Fellow of the IEE and a Fellow of the Royal Society for Arts and Manufactures. He is on the editorial board of the Pattern Analysis and Applications Journal and an associate editor of the IEEE Trans. on Information Forensics and Security. He is co-author of the book "Digital Watermarking" and co-editor of two books, "Autonomous Robot Vehicles" and "Partitioning Data Sets: With Applications to Psychology, Computer Vision and Target Tracking".

About the presentation of Ingemar Cox - 'Robust Watermarking':

In this tutorial we will look at a variety of issues in the design of robust watermarks. The robustness of a watermark is its ability to survive normal processing. In particular, we will examine robustness to additive noise, valumetric scaling (i.e. changes in brightness or volume), lossy compression, and geometric/synchronization transforms. We will discuss two main approaches. The first relies on the design of detectors that are invariant to signal degradations. The second relies on attempting to invert the degradations prior to detection. Other areas covered include spread spectrum techniques, quantization index modulation, embedding in perceptually significant regions of the signal, and empirical techniques for determining the robustness of signal components.
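As a toy illustration of the spread spectrum approach mentioned above, the sketch below embeds a pseudo-random ±1 pattern additively into a host signal and detects it by normalized correlation. All parameters (the Gaussian host as a stand-in for transform coefficients, the embedding strength `alpha`, the noise level) are invented for this example and are not taken from the tutorial:

```python
import numpy as np

def embed(host, wm, alpha=0.5):
    """Additive spread-spectrum embedding: add a scaled pseudo-random pattern."""
    return host + alpha * wm

def detect(signal, wm):
    """Normalized correlation between a signal and the watermark pattern."""
    return float(np.dot(signal, wm) /
                 (np.linalg.norm(signal) * np.linalg.norm(wm)))

rng = np.random.default_rng(1)
host = rng.normal(0.0, 10.0, 10_000)   # stand-in for transform coefficients
wm = rng.choice([-1.0, 1.0], 10_000)   # pseudo-random +/-1 watermark pattern

marked = embed(host, wm)
noisy = marked + rng.normal(0.0, 1.0, 10_000)   # additive-noise "attack"

# The marked copy, even after the attack, correlates more strongly
# with the watermark pattern than the unmarked original does.
print(detect(noisy, wm) > detect(host, wm))  # True
```

Detection by correlation is what gives the scheme its robustness to moderate additive noise: the noise is uncorrelated with the watermark pattern, so its contribution to the correlation sum averages out.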

Jana Dittmann

Jana Dittmann studied Computer Science and Economy at the Technical University in Darmstadt. In 1999, she received her PhD from the Technical University of Darmstadt. She has been a Full Professor in the field of multimedia and security at Otto-von-Guericke University Magdeburg since September 2002. Jana Dittmann specializes in the field of Multimedia Security. Her research is mainly focused on digital watermarking and content-based digital signatures for data authentication and for copyright protection. She has many national and international publications, is a member of several conference program committees, and organizes workshops and conferences in the field of multimedia and security. She was involved in all of the last six Multimedia and Security Workshops at ACM Multimedia, and was a co-chair of the CMS2001 conference in Darmstadt, Germany. She is Associate Editor for the ACM Multimedia Systems Journal and for the IEEE Transactions on Information Forensics and Security. Dr. Dittmann is a member of the ACM, IEEE and GI Informatik.

About the presentation of Jana Dittmann - 'Multimedia Technology':

The aim is to learn about Multimedia Technology with regard to:

  • the requirements that multimedia systems place on computer systems, and approaches to meeting these requirements
  • the production and management of content
  • the characteristics of and the possibilities provided by multimedia systems
  • aspects of distributed multimedia systems, which cover important research and application areas

Content and Goals:

  • Introduction to Multimedia
  • Introduction to general image, video and audio capturing, processing and compression
  • Digital Video Production

Jean-Luc Dugelay

Jean-Luc Dugelay received the Ph.D. degree in Computer Science in 1992 from the University of Rennes. Doctoral research was carried out, from 1989 to 1992, at the France Telecom Research Laboratory in Rennes (formerly CNET - CCETT). He then joined the Institut Eurécom (Sophia Antipolis), where he is currently a Professor in the Department of Multimedia Communications. His research interests are in the area of multimedia signal processing and communications, including security imaging (i.e., watermarking and biometrics), facial image analysis and talking heads. He is an author or coauthor of more than 65 publications that have appeared as journal papers or proceedings articles, 3 book chapters, and 3 international patents. He has given several tutorials on digital watermarking (co-authored with F. Petitcolas from Microsoft Research) and on biometrics (co-authored with J.-C. Junqua from Panasonic Research) at major conferences. He has been an invited speaker and/or member of the program committee of several scientific conferences and workshops. He was technical co-chair and organizer of the fourth workshop on Multimedia Signal Processing (Cannes, October 2001), and co-organizer of the workshop on Multimodal User Authentication (Santa Barbara, December 2003). His group is involved in several national and European projects related to biometrics. Jean-Luc Dugelay is a senior member of the IEEE Signal Processing Society, and is currently an Associate Editor for the EURASIP Journal on Applied Signal Processing and for the IEEE Transactions on Multimedia.

About the presentation of Jean-Luc Dugelay - 'Biometrics and Multimedia':

The security field uses three different types of authentication: something you know, something you have, or something you are - a biometric. Common physical biometrics include fingerprints, hand geometry, and retinal, iris, or facial characteristics. Behavioral characteristics include signature and voice. Ultimately, the technologies could find their strongest role as intertwined and complementary pieces of multifactor authentication systems. In the future, biometrics is expected to play a key role in enhancing security, residing in smart cards and supporting personalized Web e-commerce services. Personalization through person authentication is also very appealing in the consumer product area. In this lecture, we introduce the field of biometrics, classify the different biometrics and summarize the tradeoffs between accuracy and convenience. We give special attention to the major biometrics (e.g. fingerprint, voice, face, iris and signature) and emphasize how these biometrics can be combined in practical systems. Finally, we focus on the performance and evaluation of these biometrics, the main application areas and several case studies.
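Combining biometrics in practical systems, as mentioned at the end of the abstract, is often done by score-level fusion: each matcher's scores are normalized to a common range and then combined with a weighted sum. The sketch below uses hypothetical matcher scores and weights invented for the example; it is not taken from the lecture materials:

```python
def min_max_normalize(scores):
    """Map raw matcher scores to [0, 1] so different biometrics are comparable."""
    lo, hi = min(scores), max(scores)
    return [(s - lo) / (hi - lo) for s in scores]

def weighted_sum_fusion(matchers, weights):
    """Combine normalized score lists from several matchers, per candidate."""
    normalized = [min_max_normalize(m) for m in matchers]
    return [sum(w * s for w, s in zip(weights, col)) for col in zip(*normalized)]

# Hypothetical similarity scores for three enrolled identities.
face_scores = [0.90, 0.40, 0.30]    # face matcher, scores in [0, 1]
voice_scores = [55.0, 70.0, 20.0]   # voice matcher, entirely different scale

fused = weighted_sum_fusion([face_scores, voice_scores], weights=[0.7, 0.3])
best = fused.index(max(fused))
print(best)  # 0: the strong face match outweighs the middling voice score
```

Normalization is the crucial step: without it, the voice matcher's larger numeric range would dominate the sum regardless of the chosen weights.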

Ton Kalker

Ton Kalker was born in The Netherlands in 1956. He received his M.S. degree in mathematics in 1979 from the University of Leiden, The Netherlands. From 1979 until 1983, while he was a Ph.D. candidate, he worked as a Research Assistant at the University of Leiden. From 1983 until December 1985 he worked as a lecturer at the Computer Science Department of the Technical University of Delft. In January 1986 he received his Ph.D. degree in Mathematics.

In December 1985 he joined the Philips Research Laboratories in Eindhoven. Until January 1990 he worked in the field of Computer-Aided Design, specializing in (semi-)automatic tools for system verification. He was later a member of the Processing and Architectures for Content MANagement (PACMAN) group of Philips Research, working on the security of multimedia content, with an emphasis on watermarking and fingerprinting for video and audio. In November 1999 he became a part-time professor in the Signal Processing Systems group of Jan Bergmans, in the area of 'signal processing methods for data protection'. He is currently at Hewlett-Packard Laboratories in Palo Alto, leading a small group on Media Security.

He is a Fellow of the IEEE for his contributions to practical applications of watermarking, in particular watermarking for DVD-Video copy protection. He co-founded the IEEE Transactions on Information Forensics and Security.

His other research interests include wavelets, multirate signal processing, motion estimation, psychophysics, digital video compression and medical image processing.

About the presentation of Ton Kalker - 'Digital Rights Management':

In this talk we give an overview of the architecture of modern DRM systems. In particular, we discuss license structures, rights expression languages, cryptographic principles, file formats, super distribution and more. We highlight the principles with a few relevant examples. Time permitting, we will also discuss some recent efforts on DRM interoperability.

Stefan Katzenbeisser

Stefan Katzenbeisser received the Diploma in Computer Science in 2001 and the Doctorate in Computer Science in 2004, both from the Vienna University of Technology. He is presently a researcher at the Department of Informatics at the Technical University in Munich. His current research interests include multimedia security, digital rights management systems and the design and validation of cryptographic protocols. He was an editor of the first scientific monograph on information hiding ("Information Hiding Techniques for Steganography and Digital Watermarking", Artech House 2000) and served as a guest editor for the ACM Multimedia Systems Journal and the IEEE Transactions on Signal Processing (supplements on Media Security). In addition, he was program chair of the Information Hiding Workshop 2005 and the IFIP conference Communications and Multimedia Security CMS'05. Currently he manages (together with Jana Dittmann from the University of Magdeburg) a virtual laboratory on watermarking within the European Union project ECRYPT. He is a member of the IEEE, ACM and IACR.

About the presentation of Stefan Katzenbeisser - 'Steganography':

In this talk, we give an overview of steganography, the art and science of invisible communication. Starting with a historical overview, we review steganographic algorithms of the computer age. In particular, we discuss steganographic methods for digital images, written text and network traffic. Special emphasis will be laid on the discussion of steganographic security, which makes it possible to quantify the imperceptibility of steganographic communication.
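As a minimal illustration of image steganography, consider the classic least-significant-bit (LSB) method, one of the simplest schemes covered by such overviews: each message bit replaces the lowest bit of a cover sample, changing each pixel value by at most 1. The pixel values and message below are invented for the example:

```python
def embed_lsb(cover, message_bits):
    """Replace the least-significant bit of each cover sample with a message bit."""
    stego = [(c & ~1) | b for c, b in zip(cover, message_bits)]
    return stego + cover[len(message_bits):]   # untouched tail of the cover

def extract_lsb(stego, n_bits):
    """Read the message back out of the low bits."""
    return [s & 1 for s in stego[:n_bits]]

cover = [142, 137, 200, 55, 91, 60]   # toy 8-bit pixel values
bits = [1, 0, 1, 1]                   # message to hide

stego = embed_lsb(cover, bits)
print(extract_lsb(stego, 4))  # [1, 0, 1, 1]
```

The ±1 changes are imperceptible to a viewer, but they disturb the statistics of the low bit plane, which is precisely what the steganographic-security analyses mentioned in the abstract (and steganalysis techniques) exploit.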

Deepa Kundur

Deepa Kundur was born and raised in Toronto, Canada. She received the B.A.Sc., M.A.Sc., and Ph.D. degrees, all in Electrical and Computer Engineering, in 1993, 1995, and 1999, respectively, from the University of Toronto, Canada. In January 2003, she joined the Electrical Engineering Department at Texas A&M University, College Station, where she is a member of the Wireless Communications Laboratory and holds the position of Assistant Professor. Before joining Texas A&M, she was an Assistant Professor with the Edward S. Rogers Sr. Department of Electrical and Computer Engineering at the University of Toronto, where she was a Nortel Institute for Telecommunications Associate and Bell Canada Junior Chair-holder in Multimedia.

Dr. Kundur's research interests include multimedia security, sensor network security, video cryptography and digital watermarking for digital rights management, data hiding and steganography for computer forensics, nonlinear and adaptive information processing techniques, and hardware implementation aspects of security algorithms. She has given tutorials in the area of information security for digital rights management at ICME-2003 and Globecom-2003, and is a guest editor of the June 2004 Proceedings of the IEEE Special Issue on Enabling Security Technologies for Digital Rights Management.

About the presentation of Deepa Kundur - 'Authentication Watermarking':

Authentication watermarking has been proposed as a "soft" alternative to the traditional use of cryptographic message authentication codes (MACs) and digital signatures (DSs) to verify the integrity of multimedia information. MACs and DSs indicate with high probability whether any bit-level change has occurred to the message content. For multimedia applications in which the content may undergo adaptation (such as format conversion), or in which the delivery network operates under "best-effort" principles where errors are unavoidable, authentication watermarking is an attractive solution for authentication and tamper assessment.

This talk provides an overview of the principles behind authentication watermarking, including watermark code design, embedding strategies, and perceptual metrics. A comparison of authentication watermarking (referred to as "embedding-based" authentication) to traditional "label-based" approaches will be provided. Ways in which to incorporate cryptographic security while providing tamper assessment capability are studied.

About the presentation of Deepa Kundur - 'Attacks Against Watermarking Schemes':

There has been a great deal of activity in the field of watermarking attacks in the last decade. This talk will survey a number of signal processing and synchronization attacks on watermarking systems. Focus will be placed on emerging collusion attacks on video watermarks.

Andreas Lang

Andreas Lang works as a research scientist at Otto-von-Guericke University of Magdeburg in Germany. His main research topics are multimedia security, in particular digital watermarking and the evaluation of the robustness of digital audio watermarking. His current activities are the set-up of the StirMark Benchmark test suite for audio and the evaluation of digital watermarking algorithms for various projects. Previously, Mr. Lang worked as a researcher at FhG-IPSI in Darmstadt. In December 2000 he graduated from Anhalt University in Köthen (Germany) with a Master's degree in computer science. Andreas Lang is a member of the GI Informatik.

About the presentation of Andreas Lang - 'Benchmarking, Implementation':

A wide range of watermarking evaluation approaches, and especially image benchmarking suites, have been described in the literature. We set the main focus on the evaluation of digital audio watermarking with StirMark Benchmark for Audio (SMBA). The architecture of SMBA consists of four types of modules: first, the attack module StirMark for Audio (SMFA); second, the read_write_stream module, which converts audio files into streams and back into files and is needed for the input and output of audio signals; third, SM-Bell, a wrapper around SMFA and read_write_stream that makes them easier to use; and fourth, SM-Bell_GUI, a graphical user interface for SM-Bell. The audio file is read by the read_write_stream module. The audio data are passed to the first SMFA process, which runs the first attack. The resulting audio signal becomes the input of the second SMFA process, and so on. At the end of the pipeline, the read_write_stream module can save the audio signal to an audio file; if the user does not want to store the signal in a file, the audio stream can instead be sent to the sound device for playback.
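The chaining of SMFA attack processes described above can be pictured as a simple pipeline in which each stage's output becomes the next stage's input. The sketch below is only a Python analogy of that architecture; the two attack functions are invented stand-ins, not the actual SMFA modules:

```python
import random

def volume_scale(samples, factor=0.9):
    """Toy attack: scale the signal amplitude (a valumetric change)."""
    return [s * factor for s in samples]

def add_noise(samples, strength=0.01, seed=0):
    """Toy attack: add small deterministic pseudo-random noise."""
    rng = random.Random(seed)
    return [s + rng.uniform(-strength, strength) for s in samples]

def run_pipeline(samples, attacks):
    """Chain attacks: each stage's output feeds the next, as in the SMBA design."""
    for attack in attacks:
        samples = attack(samples)
    return samples

audio = [0.0, 0.5, -0.5, 0.25]   # toy audio samples
attacked = run_pipeline(audio, [volume_scale, add_noise])
print(len(attacked) == len(audio))  # True
```

A benchmarking run would then embed a watermark, push the marked signal through such a pipeline, and check whether the detector still finds the mark afterwards.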

Nasir Memon

Nasir Memon is a Professor in the computer science department at Polytechnic University, New York. Prof. Memon's research interests include Data Compression, Computer and Network Security, and Multimedia Communication, Computing and Security. He has published more than 150 articles in journals and conference proceedings and holds 4 patents in image compression and security. He has been the principal investigator on several funded research and education projects sponsored by government agencies such as NSF, AFOSR and AFRL, as well as private industry such as HP, Intel, Panasonic and Mitsubishi. He was a visiting faculty member at Hewlett-Packard Research Labs during the academic year 1997-98. He has won several awards, including the NSF CAREER award and the Jacobs Excellence in Education award.

Prof. Memon was an associate editor for the IEEE Transactions on Image Processing from 1999-2002. He is currently an associate editor for the IEEE Transactions on Information Forensics and Security, the ACM Multimedia Systems Journal and the Journal of Electronic Imaging. He was a guest editor for the IEEE Transactions on Signal Processing special issue on Signal Processing for Data Hiding in Digital Media & Secure Content Delivery, for the ACM Multimedia Systems Journal special issue on Multimedia Security, for the Signal Processing Journal special issue on Security of Data Hiding Technologies, and for the European Journal on Applied Signal Processing special issue on Multimedia Security and Rights Management.

About the presentation of Nasir Memon - 'Fingerprinting':

Digital fingerprinting is a technology to prevent the unauthorized dissemination and use of multimedia content (e.g., images, video, audio). This is achieved by using data hiding techniques to embed into each copy of the content, before distribution, a unique identifier matched to the identity of the corresponding recipient. Later, the embedded fingerprint is detected to enforce the rightful use of the content. However, digital fingerprinting techniques are also prone to certain attacks that intend to invalidate the embedded mark. As in all data hiding systems, robustness of the embedding/detection scheme is a core requirement of all fingerprinting techniques. A design requirement peculiar to fingerprinting techniques is collusion resistance.

In a collusion attack on a digital fingerprinting system, several users combine their fingerprinted copies of the same content to generate a new copy whose extracted fingerprint is not recognizable by the embedder and cannot be used to trace the malicious users. Collusion may take the form of averaging a number of copies together, of patching random pieces of a number of copies into a whole, or a combination of both. Therefore, the design and generation of codes for fingerprinting are of vital importance. For this purpose, orthogonal codes, error-correcting codes, and codes based on combinatorial design principles have been proposed. In this tutorial, we will present an overview of these approaches, emphasizing their strengths and weaknesses.
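A minimal sketch of the averaging form of collusion, with an invented six-sample "content" and two orthogonal ±1 fingerprints (a toy, not one of the code constructions named above), shows how each colluder's fingerprint is attenuated:

```python
def embed(content, fingerprint, alpha=2):
    """Embed a +/-1 fingerprint additively with strength alpha."""
    return [c + alpha * f for c, f in zip(content, fingerprint)]

def detect(work, original, fingerprint):
    """Informed detection: subtract the original, then correlate."""
    return sum((w - o) * f for w, o, f in zip(work, original, fingerprint))

content = [100, 120, 90, 110, 105, 95]   # toy media samples
fp_alice = [1, -1, 1, -1, 1, -1]         # orthogonal +/-1 fingerprints
fp_bob = [1, 1, -1, -1, 1, 1]

copy_a = embed(content, fp_alice)
copy_b = embed(content, fp_bob)

# Averaging collusion: each colluder's fingerprint survives at half strength.
colluded = [(a + b) / 2 for a, b in zip(copy_a, copy_b)]

# Alice's correlation drops from 12 (her own copy) to 6 (the colluded copy).
print(detect(copy_a, content, fp_alice), detect(colluded, content, fp_alice))
```

With many colluders the per-user correlation shrinks as 1/n, which is why collusion-resistant code design (orthogonal, error-correcting, or combinatorial codes) matters.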

About the presentation of Nasir Memon - 'Steganalysis':

Steganography refers to the science of "invisible" communication. Unlike cryptography, where the goal is to secure communications from an eavesdropper, steganographic techniques strive to hide the very presence of the message itself from an observer. Although steganography has been used for hundreds of years, recent years have seen a resurgence of steganographic techniques based on digital multimedia objects, especially images.

In the last few years, we have seen many new and powerful image-based steganography and steganalysis techniques reported in the literature. In this tutorial we go over some general concepts and ideas that apply to steganography and steganalysis. We briefly review and discuss the notions of steganographic security and capacity and some of the more recent image steganography techniques. Then we will delve more deeply into steganalysis techniques and highlight the key developments that have taken place and the different contributions that were made.

Martin Schmucker

Martin Schmucker has been working in the security department of Fraunhofer-IGD since 2000. He received his Diploma in computer science from the University of Ulm. During his studies his focus was on image processing and computer vision; his diploma thesis was in the area of medical image processing.

After that he worked in industry in the field of telematics and traffic control systems and implemented components of a client/server system.

He has been working at Fraunhofer-IGD on several European and national projects such as Wedelmusic (music score watermarking), Certimark (benchmarking of watermarking techniques), MusicNetwork (protection workgroup), and ECRYPT (in the watermarking virtual laboratory WAVILAB).

His current work focuses on symbolic media, particularly on the identification of music scores by watermarking and fingerprinting techniques. Besides identification technology for sheet music, he works on fingerprinting techniques for images and videos. His current research interests include image processing (e.g. image quality degradations in printing and scanning processes), digital watermarking (e.g. reversible image watermarking and image steganalysis), and content protection (e.g. secure content distribution).

About the presentation of Martin Schmucker - 'Perceptual Hashes':

In contrast to watermarking techniques, which modify content, perceptual hashing techniques can identify content without prior modification. Instead of retrieving embedded identifiers, content characteristics are exploited. These characteristics can range from simple statistical measures to semantic descriptors. Perceptual hashes can thus be compared with human fingerprints: although they "concentrate" content-intrinsic features into a unique identifier, the original cannot be recovered from its perceptual hash. They have an inherent advantage: there is no content degradation, and no preprocessing of the content is necessary.
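As a toy example of a perceptual hash built from a simple statistical measure, the following "average hash" sketch derives one bit per sample by thresholding against the mean. The pixel values are invented, and real systems first resize and smooth the image; the point is only that the hash tolerates perceptually irrelevant changes that would break a cryptographic hash:

```python
def average_hash(pixels):
    """One bit per pixel: brighter than the mean or not."""
    mean = sum(pixels) / len(pixels)
    return [1 if p > mean else 0 for p in pixels]

def hamming(h1, h2):
    """Number of differing hash bits; small distance means 'same content'."""
    return sum(a != b for a, b in zip(h1, h2))

img = [52, 200, 60, 190, 55, 210, 58, 205]   # toy 8-pixel "image"
brighter = [p + 20 for p in img]             # global brightness change

h1, h2 = average_hash(img), average_hash(brighter)
print(hamming(h1, h2))  # 0 -- the hash survives the brightness shift
```

A cryptographic hash of the same two signals would differ completely; the perceptual hash's robustness to such distortions is exactly what makes it usable for content identification.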

In recent years a large research effort has been devoted to the development of perceptual hashing techniques. In this tutorial we explain the basic principles of perceptual hashing techniques. Reported techniques for different media types are described and reviewed. The focus is on feature extraction as well as on retrieval procedures. After reviewing the basic principles, general evaluation procedures for perceptual hashing techniques are discussed. In addition, applications of perceptual hashing techniques are presented. Finally, the presentation concludes with open issues in the area of perceptual hashing.

Andreas Uhl

Andreas Uhl is an associate professor (tenured in computer science) at the Department of Scientific Computing (Salzburg University), where he heads the Multimedia Signal Processing and Security group.

He received the B.S. and M.S. degrees (both in Mathematics) from Salzburg and Innsbruck University and he completed his PhD on Applied Mathematics at Salzburg University.

Andreas Uhl is also a part-time lecturer at the Carinthia Tech Institute and has recently been a guest professor at the universities of Linz and Klagenfurt. His research interests include multimedia signal processing (with emphasis on compression and security issues), parallel and distributed processing, and number-theoretical methods in numerics.

About the presentation of Andreas Uhl - 'Media Encryption':

We will discuss media encryption, using the protection of visual data as the most important example. First, we will discuss the necessity of developing specific media encryption techniques apart from full encryption with strong cryptographic ciphers. Among the issues covered will be complexity reduction (selective and soft encryption), bitstream compliance (transcoding applications), and different functionalities (transparent encryption, error robustness). The second part will give an overview of techniques proposed in the literature so far to achieve the above-mentioned goals, ranging from MPEG encryption to more esoteric techniques such as chaotic image encryption, and we will highlight the strengths and weaknesses of the respective approaches.
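The idea of complexity reduction through selective encryption can be sketched as follows: only a small, perceptually important part of each coded block is encrypted, while the bulk of the data is left in the clear. In the toy sketch below, the first (DC-like) byte of each block stands in for the perceptually important syntax elements of a real codec, and a SHA-256-based keystream stands in for a real stream cipher; both are invented for the example:

```python
import hashlib

def keystream(key, n):
    """Toy counter-mode keystream from SHA-256 (stand-in for a real stream cipher)."""
    out = b""
    counter = 0
    while len(out) < n:
        out += hashlib.sha256(key + counter.to_bytes(4, "big")).digest()
        counter += 1
    return out[:n]

def selective_encrypt(blocks, key):
    """Encrypt only the first byte of each block; leave the remaining data alone."""
    ks = keystream(key, len(blocks))
    return [bytes([b[0] ^ k]) + b[1:] for b, k in zip(blocks, ks)]

blocks = [b"\x80rest-of-block", b"\x7fmore-ac-data"]   # toy coded blocks
enc = selective_encrypt(blocks, b"secret")
dec = selective_encrypt(enc, b"secret")   # XOR with the same keystream inverts it
print(dec == blocks)  # True
```

Only a fraction of the bitstream passes through the cipher, which is where the complexity saving comes from; the security then rests on the encrypted portion being essential for reconstructing intelligible content.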