What do we mean by “embedded” files in PDF?

The most important new feature of the recently released PDF/A-3 standard is that, unlike PDF/A-1 and PDF/A-2, it allows you to embed any file you like. Whether this is a good thing or not is the subject of some heated on-line discussions. But what do we actually mean by embedded files? As it turns out, the answer to this question isn’t as straightforward as you might think. One reason for this is that in colloquial use we often talk about “embedded files” to describe the inclusion of any “non-text” element in a PDF (e.g. an image, a video or a file attachment). In the PDF standards (including PDF/A), on the other hand, the term “embedded files” refers to something much more specific, which is closely tied to PDF’s internal structure.

Embedded files and embedded file streams

When the PDF standard mentions “embedded files”, what it really refers to is a specific data structure. PDF has a File Specification Dictionary object, which in its simplest form is a table that contains a reference to some external file. PDF 1.3 extended this, making it possible to embed the contents of referenced files directly within the body of the PDF using Embedded File Streams. They are described in detail in Section 7.11.4 of the PDF Specification (ISO 32000). A File Specification Dictionary that refers to an embedded file can be identified by the presence of an EF entry.

Here’s an example (source: ISO 32000). First, here’s a file specification dictionary:

31 0 obj
<</Type /Filespec /F (mysvg.svg) /EF <</F 32 0 R>> >>
endobj

Note the EF entry, which references another PDF object. This is the actual embedded file stream. Here it is:

32 0 obj
<</Type /EmbeddedFile /Subtype /image#2Fsvg+xml /Length 72>>
stream
…SVG Data…
endstream
endobj

Note that the part between the stream and endstream keywords holds the actual file data, here an SVG image, but this could really be anything!

So, in short, when the PDF standard mentions “embedded files”, this really means Embedded File Streams.
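
If you want to check for this programmatically, the key test is simply whether a file specification dictionary contains an EF entry. Below is a minimal sketch of such a check in Python. It uses the pikepdf library, which is purely my own choice of tool here (not something prescribed by the standard), and the exact API calls may differ between library versions:

# Minimal sketch: find embedded file streams in a PDF.
# Assumes the pikepdf library is installed (pip install pikepdf);
# exact API details may vary between versions.
import pikepdf

def find_embedded_file_streams(path):
    """Return the names of embedded files declared via /Type /Filespec + /EF."""
    found = []
    with pikepdf.open(path) as pdf:
        for obj in pdf.objects:
            if not isinstance(obj, pikepdf.Dictionary):
                continue
            # A file specification dictionary has /Type /Filespec;
            # the /EF entry points to the actual embedded file stream.
            if obj.get("/Type") == pikepdf.Name("/Filespec") and obj.get("/EF") is not None:
                found.append(str(obj.get("/F", "")))
    return found

if __name__ == "__main__":
    print(find_embedded_file_streams("fileAttachment.pdf"))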

So what about “embedded” images?

Here’s the first source of confusion: if a PDF contains images, we often colloquially call these “embedded”. However, internally they are not represented as Embedded File Streams, but as so-called Image XObjects. (In fact the PDF standard also includes yet another structure called inline images, but let’s forget about those just to avoid making things even more complicated.)

Here’s an example of an Image XObject (again taken from ISO 32000):

10 0 obj
<< /Type /XObject /Subtype /Image /Width 100 /Height 200 /ColorSpace /DeviceGray /BitsPerComponent 8 /Length 2167 /Filter /DCTDecode >>
stream
…Image data…
endstream
endobj

Similar to embedded file streams, the part between the stream and endstream keywords holds the actual image data. The difference is that only a limited set of pre-defined formats is allowed; these are identified by the Filter entry (see Section 7.4 of ISO 32000). In the example above, the value of Filter is DCTDecode, which means we are dealing with JPEG-encoded image data.
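
As an aside, this difference is easy to see programmatically: instead of looking for file specification dictionaries, you look for stream objects with subtype Image and inspect their Filter entry. Here is a minimal sketch along the same lines as the previous one (again using pikepdf as an arbitrary choice of tool, so the details are illustrative only):

# Minimal sketch: list Image XObjects and their compression filters.
# Assumes pikepdf is installed; API details may vary between versions.
import pikepdf

def list_image_xobjects(path):
    """Return (width, height, filter) for each Image XObject in the file."""
    images = []
    with pikepdf.open(path) as pdf:
        for obj in pdf.objects:
            # Image XObjects are stream objects with /Subtype /Image;
            # /Filter tells us how the image data are encoded
            # (e.g. /DCTDecode for JPEG, /JPXDecode for JPEG 2000).
            if isinstance(obj, pikepdf.Stream) and obj.get("/Subtype") == pikepdf.Name("/Image"):
                images.append((int(obj.get("/Width", 0)),
                               int(obj.get("/Height", 0)),
                               str(obj.get("/Filter", ""))))
    return images

if __name__ == "__main__":
    for width, height, img_filter in list_image_xobjects("example.pdf"):
        print(width, "x", height, "pixels, filter:", img_filter)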

Embedded file streams and file attachments

Going back to embedded file streams, you may now start wondering what they are used for. According to Section 7.11.4.1 of ISO 32000, they are primarily intended as a mechanism to ensure that external references in a PDF (i.e. references to other files) remain valid. It also states:

The embedded files are included purely for convenience and need not be directly processed by any conforming reader.

This suggests that the usage of embedded file streams is simply restricted to file attachments (through a File Attachment Annotation or an EmbeddedFiles entry in the document’s name dictionary).

Here’s a sample file (created in Adobe Acrobat 9) that illustrates this:

http://www.opf-labs.org/format-corpus/pdfCabinetOfHorrors/fileAttachment.pdf

Looking at the underlying code we can see the File Specification Dictionary:

37 0 obj
<</Desc()/EF<</F 38 0 R>>/F(KSBASE.WQ2)/Type/Filespec/UF(KSBASE.WQ2)>>
endobj

Note the /EF entry, which means the referenced file is embedded (the actual file data are in a separate stream object).

Further digging also reveals an EmbeddedFiles entry:

33 0 obj
<</EmbeddedFiles 34 0 R/JavaScript 35 0 R>>
endobj
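
(As a side note: this EmbeddedFiles entry is easy to locate programmatically, because it lives in the name dictionary that is referenced from the document catalog. The sketch below, which again uses pikepdf purely as an illustration, checks for its presence.)

# Minimal sketch: does the document's name dictionary declare attachments?
# Assumes pikepdf is installed; API details may vary between versions.
import pikepdf

def has_embedded_files_entry(path):
    with pikepdf.open(path) as pdf:
        # pdf.Root is the document catalog; /Names is its name dictionary.
        names = pdf.Root.get("/Names")
        return names is not None and names.get("/EmbeddedFiles") is not None

if __name__ == "__main__":
    print(has_embedded_files_entry("fileAttachment.pdf"))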

However, careful inspection of ISO 32000 reveals that embedded file streams can also be used for multimedia! We’ll have a look at that in the next section…

Embedded file streams and multimedia

Section 13.2.1 (Multimedia) of the PDF Specification (ISO 32000) describes how multimedia content is represented in PDF (emphases added by me):

  • Rendition actions (…) shall be used to begin the playing of multimedia content.

  • A rendition action associates a screen annotation (…) with a rendition (…)

  • Renditions are of two varieties: media renditions (…) that define the characteristics of the media to be played, and selector renditions (…) that enables choosing which of a set of media renditions should be played.

  • Media renditions contain entries that specify what should be played (…), how it should be played (…), and where it should be played (…)

The actual data for a media object are defined by Media Clip Objects, and more specifically by the media clip data dictionary. Its description (Section 13.2.4.2) contains a note saying that this dictionary “may reference a URL to a streaming video presentation or a movie embedded in the PDF file”. The description of the media clip data dictionary (Table 274) also states that the actual media data are “either a full file specification or a form XObject”.

In plain English, this means that multimedia content in PDF (e.g. movies that are meant to be rendered by the viewer) may be represented internally as an embedded file stream.

The following sample file illustrates this:

http://www.opf-labs.org/format-corpus/pdfCabinetOfHorrors/embedded_video_quicktime.pdf

This PDF 1.7 file was created in Acrobat 9, and if you open it you will see a short Quicktime movie that plays upon clicking on it.

Digging through the underlying PDF code reveals a Screen Annotation, a Rendition Action and a Media clip data dictionary. The latter looks like this:

41 0 obj
<</CT(video/quicktime)/D 42 0 R/N(Media clip from animation.mov)/P<</TF(TEMPACCESS)>>/S/MCD>>
endobj

It contains a reference to another object (42 0), which turns out to be a File Specification Dictionary:

42 0 obj
<</EF<</F 43 0 R>>/F(<embedded file>)/Type/Filespec/UF(<embedded file>)>>
endobj

What’s particularly interesting is the /EF entry, which means we’re dealing with an embedded file stream here. (The actual movie data are in a stream object (43 0) that is referenced by the file specification dictionary.)

So, the analysis of this sample file confirms that embedded file streams are actually used by Adobe Acrobat for multimedia content.
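
For completeness, here is a sketch that follows the chain we just walked through by hand: it looks for media clip data dictionaries (/S /MCD) and checks whether the file specification referenced by their /D entry contains an /EF entry, i.e. whether the media data are stored as an embedded file stream. As before, pikepdf is only my own choice of tool and the code is illustrative rather than definitive:

# Minimal sketch: find multimedia content stored as an embedded file stream
# (media clip data dictionary -> file specification -> /EF).
# Assumes pikepdf is installed; API details may vary between versions.
import pikepdf

def find_embedded_media_clips(path):
    """Return the content types (/CT) of media clips with embedded data."""
    clips = []
    with pikepdf.open(path) as pdf:
        for obj in pdf.objects:
            if not isinstance(obj, pikepdf.Dictionary):
                continue
            # A media clip data dictionary has /S /MCD; its /D entry is
            # either a file specification or a form XObject.
            if obj.get("/S") != pikepdf.Name("/MCD"):
                continue
            data = obj.get("/D")
            if isinstance(data, pikepdf.Dictionary) and data.get("/EF") is not None:
                clips.append(str(obj.get("/CT", "unknown")))
    return clips

if __name__ == "__main__":
    print(find_embedded_media_clips("embedded_video_quicktime.pdf"))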

What does PDF/A say on embedded file streams?

In PDF/A-1, embedded file streams are not allowed at all:

A file specification dictionary (…) shall not contain the EF key. A file’s name dictionary shall not contain the EmbeddedFiles key

In PDF/A-2, embedded file streams are allowed, but only if the embedded file itself is PDF/A (1 or 2) as well:

A file specification dictionary, as defined in ISO 32000-1:2008, 7.11.3, may contain the EF key, provided that the embedded file is compliant with either ISO 19005-1 or this part of ISO 19005.

Finally, in PDF/A-3 this last limitation was dropped, which means that any file may be embedded (source: this unofficial newsletter item, as at this moment I don’t have access to the full specification of PDF/A-3).

Does this mean PDF/A-3 supports multimedia?

No, not at all! Even though nothing stops you from embedding multimedia content (e.g. a Quicktime movie), you wouldn’t be able to use it as a renderable object inside a PDF/A-3 document. The reason is that the annotations and actions that are needed for this (e.g. Screen annotations and Rendition actions, to name but a few) are not allowed in PDF/A-3. So effectively you are only able to use embedded file streams as attachments.

Adobe adding to the confusion

A few weeks ago the embedding issue came up again in a blog post by Gary McGath. One of the comments there is from Adobe’s Leonard Rosenthol (who is also the Project Leader for PDF/A). After correctly pointing out some mistakes in both the original blog post and in an earlier comment by me, he nevertheless added to the confusion by stating that objects that are rendered by the viewer (movies, etc.) all use Annotations, and that embedded files (which he apparently uses as a synonym for attachments) are handled in a completely different manner. This doesn’t appear to be completely accurate: at least one class of renderable objects (screen annotations with rendition actions) may use embedded file streams. Also, embedded files that are used as attachments may be associated with a File Attachment Annotation, which means that “under the hood” both cases are more similar than first meets the eye (as confirmed by the analysis of the two sample files in the preceding sections). Contributing to this confusion is the fact that Section 7.11.4 of ISO 32000 erroneously states that embedded file streams are only used for non-renderable objects like file attachments, which is contradicted by their allowed use for multimedia content.

Does any of this matter, really?

Some might argue that the above discussion is nothing but semantic nitpicking. However, details like these do matter if we want to do a proper assessment of preservation risks in PDF documents. As an example, in this previous blog post I demonstrated how a PDF/A validator tool can be used to profile PDFs for “risky” features. Such tools typically give you a list of the features they encountered; it is then largely up to the user to interpret what those features actually mean.

Now suppose we have a pre-ingest workflow that is meant to accept PDFs with multimedia content, while at the same time rejecting file attachments. By using only the presence of an embedded file stream (reported by both Apache’s and Acrobat’s Preflight tools) as a rejection criterion, we could end up unjustly rejecting files with multimedia content as well. To avoid this, we also need to take into account what the embedded file stream is used for, and for this we need to look at which annotation types are used and whether the document’s name dictionary contains an EmbeddedFiles entry. However, if we don’t know precisely which features we are looking for, we may well arrive at the wrong conclusions!
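
To make this a little more concrete, the sketch below shows what such a triage step could look like in code. It is only an illustration of the logic (once again using pikepdf, with the same caveats as before), not a ready-made profiling tool: rather than rejecting on the mere presence of an embedded file stream, it reports the annotation types and the EmbeddedFiles entry so that attachments can be distinguished from multimedia.

# Minimal sketch: report how embedded file streams appear to be used,
# instead of rejecting a PDF on their mere presence.
# Assumes pikepdf is installed; API details may vary between versions.
import pikepdf

def classify_embedded_usage(path):
    report = {"file_attachment_annotations": 0,
              "screen_annotations": 0,
              "embedded_files_name_entry": False}
    with pikepdf.open(path) as pdf:
        # Attachments may be declared in the document's name dictionary...
        names = pdf.Root.get("/Names")
        if names is not None and names.get("/EmbeddedFiles") is not None:
            report["embedded_files_name_entry"] = True
        # ...or attached via File Attachment annotations, whereas renderable
        # multimedia uses Screen annotations (with Rendition actions).
        for obj in pdf.objects:
            if not isinstance(obj, pikepdf.Dictionary):
                continue
            subtype = obj.get("/Subtype")
            if subtype == pikepdf.Name("/FileAttachment"):
                report["file_attachment_annotations"] += 1
            elif subtype == pikepdf.Name("/Screen"):
                report["screen_annotations"] += 1
    return report

if __name__ == "__main__":
    print(classify_embedded_usage("embedded_video_quicktime.pdf"))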

This is made all the worse by the fact that preservation issues are often formulated in vague and non-specific ways. An example is this issue on the OPF Wiki on the detection of “embedded objects”. The issue’s description suggests that images and tables are the main concern (neither of which is, strictly speaking, an embedded object). The corresponding solution page subsequently complicates things further by also throwing file attachments into the mix. To solve issues like these, it helps to know that images are (mostly) represented as Image XObjects in PDF, so the solution should be a method for detecting Image XObjects. Without some background knowledge of PDF’s internal data structure, however, solving such issues becomes a daunting, if not impossible, task.

Final note

In this blog post I have tried to shed some light on a number of common misconceptions about embedded content in PDF. I might have inadvertently created some new ones in the process, so feel free to contribute any corrections or additions using the comment fields below.

The PDF specification is vast and complex, and I have only addressed a limited number of its features here. For instance, one might argue that a discussion of embedding-related features should also include fonts, metadata, ICC profiles, and so on. The coverage of multimedia features here is also incomplete, as I didn’t include Movie Annotations or Sound Annotations (which preceded the Screen Annotations that are now more commonly used). These things were all left out because of time and space constraints, which also means that further surprises may well be lurking ahead!


Johan van der Knijff
KB / National Library of the Netherlands

Comments

  1. ecochrane
    January 14, 2013 @ 10:01 PM Europe/Berlin

    Hi Johan,

    Yes I do agree with you. I can see the utility in what you have done. It’s very valuable. For objects that have already been transferred and that we do not know much about, it is always going to be a case of trying to do the best we can. However, that shouldn’t mean we do the same for everything new coming in going forwards, and that is what concerns me: that this will become the default process for everything.

    Assuming an ingest of a constant percentage of digital objects from all sources (which is probably optimistic if I’m honest – however), with the rate of growth of digital information it should only take a few years for the volume of new digital objects in any repository to surpass the volume of objects that are currently in the repository.  Given that, those old files you speak of will quickly become a small percentage of what you hold. Having said that, I do acknowledge that those files will likely be some of the more interesting precisely because of their age and rarity, and so they probably do deserve special treatment. 

  2. paul
    January 11, 2013 @ 12:17 PM Europe/Berlin

    1) I strongly disagree with your thoughts here. Over the last couple of years I’ve worked with a lot of different practitioners (80+) from a number of institutions (60+) and in the vast majority of cases they don’t know enough about their data, never mind have checksums, rendering stack details or any other useful stuff. I should say, I know these details as I’ve just been putting them on a poster for IDCC next week, and have also previously blogged here about analysis of these results. “Definitely possible (and often simple)” could not be further from the truth. The exception is probably some archives where it’s possible to build up a relationship with the depositor and get more out of them. Based on the evidence collected from many organisations (including archives) that I’ve outlined above, these cases are few and far between however. I’d also pick up on this quote: “requiring documentation of the interaction-stack is not a particularly onerous standard to require?”: it is onerous if you know nothing about these issues, which is the case with many depositors. I’d also quickly mention Andy’s “Formats over Time: Exploring UK Web History” paper. He found (for PDF) that “later years have seen an explosion in the number of distinct creator applications, with over 2100 different implementations of around 600 distinct software packages”. Obviously we need some more data and analysis here, but this suggests a very complex picture for PDF that I don’t think we fully understand yet.

    2) Granted, it does depend on the location, but I suspect it will be illegal in the majority of countries. Again, I’d question the practicality here. Building a software archive is resource intensive – resources that the majority of repositories simply don’t have.

    3) Yes, I’m looking forward to you guys completely solving this one *:-)

    4) So now you’re suggesting depositors will list external dependencies as well? This would of course be lovely, but is unfortunately nowhere near any reality I’m familiar with.

    5) Edge case? Well maybe, but an all or nothing critical one! Again you’re assuming a lot about the depositor here. If you don’t pick up encryption very soon after deposit, you’re likely to be completely stuck. It’s important.

    As you know, I’ve been an advocate of emulation for a long time and did quite a bit to change perceptions about emulation. But I really don’t think it offers a solution to the PDF problems we’re discussing here. That doesn’t mean I’m necessarily advocating migration though.

    Your comment about investment in a migration approach is rather interesting. I’d not thought about it like that before. I’ll digest that a bit more before commenting further on it.

    Re: preservation risks: yes – anything that’s going to get in the way of future access. I love Johan’s page on JP2, which says it all.

    Great chatting with you Euan, as always!

  3. johan
    January 10, 2013 @ 5:26 PM Europe/Berlin

    Great comment. To answer your questions:

    The problem

    Yes, I think you’ve largely outlined the problem here, although I wouldn’t necessarily put it down to our characterisation possibilities being too weak. I think a lot of the work that has been done on (PDF) characterisation so far has been mostly about extracting as much information (from a PDF) as possible. This may have been partially driven by a focus on migration as a preservation strategy, where the main role of the characterisation tool is to record “significant properties” of source and target files. The aim here was mainly to track any unwanted changes in the migration process. At the same time a lot of work has been dedicated to format validation (e.g. PDF validation by JHOVE). The problem here is that even a “valid” PDF may still have preservation risks associated with it (e.g. encryption, non-embedded fonts), so format validation alone is not enough. Moreover, even though a tool like JHOVE can give you a lot of information on a PDF (in terms of reported properties), most of it isn’t directly linked to concrete risks. To be clear, this is not a fault of JHOVE, but simply the result of the fact that it was designed with a different purpose in mind.

    The solution

    As for the solution: you’re suggesting something similar to what I did with jpylyzer. The comparison makes me a bit nervous, because it seems to suggest the development of a new tool, which I think is something we should avoid at all costs! Personally I think what we need here is a two-stage process:

    1. For any PDF, identify all features/functionality that could pose a potential preservation risk
    2. Validate the outcome of 1. against an institutional (technical) profile

    Now this is actually pretty similar to what I did here with jpylyzer:

    http://www.openpreservation.org/blogs/2012-09-04-automated-assessment-jp2-against-technical-profile

    Now let’s assume that “all features that could pose a potential preservation risk” can be approximated by everything that is not allowed in PDF/A (e.g. PDF/A-1, which is the strictest profile). In that case, all we need for stage 1 is a software tool that is able to compare any given PDF against the PDF/A profile, and report back the deviations (i.e. all features encountered that violate PDF/A). These tools already exist: they’re PDF/A validators. Some first tests with an open-source one revealed a number of problems, but also showed a lot of promise. There are also several commercial tools; this 2009 study (which it seems has been largely overlooked by the digital preservation community) suggests that some of them are actually pretty good. So we can probably do this already!

    Then the next step is to compare the output of those tools against our institutional profile. This is something we could do in a similar fashion to what I did with jpylyzer; if the PDF/A validator output is nicely formatted XML, a bunch of (relatively simple) Schematron rules would probably be all that’s needed. The main difficulty here is that you need a pretty good understanding of how preservation risks, PDF features and the validator’s error codes are interlinked. This is somewhat complex, but definitely doable.
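
    (Just to illustrate what such a stage-2 check might look like, here is a small Python sketch rather than Schematron. The report format it assumes – <error code="..."/> elements in an XML file – is entirely made up for the purpose of this example and does not correspond to the actual output of any existing validator.)

# Hypothetical sketch of stage 2: compare a validator's XML report
# against an institutional profile. The report format assumed here
# (<report><error code="..."/></report>) is invented for illustration.
import xml.etree.ElementTree as ET

# Violations the (fictional) institutional profile is willing to accept,
# e.g. those that merely indicate multimedia content.
ALLOWED_VIOLATIONS = {"screen-annotation", "rendition-action"}

def check_against_profile(report_path):
    tree = ET.parse(report_path)
    violations = {e.get("code") for e in tree.iter("error") if e.get("code")}
    blocking = violations - ALLOWED_VIOLATIONS
    return {"accept": not blocking, "blocking_violations": sorted(blocking)}

if __name__ == "__main__":
    print(check_against_profile("validator_report.xml"))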

    How to make it happen

    To me, the most important thing is not to start reinventing the wheel, and to try to minimise any development efforts. If we restrict ourselves to open source tools, Apache Preflight looks like the most promising one, but apparently it’s still in its early stages: it needs further testing, and the version that I tested had problems that would seriously limit its use in practical settings. However, the developers have promptly picked up on my bug reports and some of these problems should be fixed in the latest version (I haven’t had any time for testing yet). So one possible avenue would be to, as a community, get more involved with the further development of this tool. Right now we’re considering some scalability tests on a very large dataset within SCAPE (actually Clemens Neudecker came up with this idea some days ago; we will discuss this further at the upcoming scenario workshop later this month). This may involve some development work as well (e.g. we would probably need an XML output handler). The second stage (validation against an institutional profile) would mainly require research effort, and probably not much development.

    An alternative would be to go for commercially available tools. The advantage would be that it doesn’t involve any development in the first stage, as mature tools already exist (some testing would of course be needed). The main disadvantage would be that it makes a collaborative effort for stage 2 very difficult.

    I would love to be involved in this myself, but I have to see how much time I can dedicate here. I will probably know more after the SCAPE scenario workshop (i.e. late January).

    As for hackathons: not sure what they might contribute in this case. They may be fine for relatively small, ad-hoc issues, but what we need here is mainly a matter of sustained research and development by the right people. But that’s just my own opinion of course!

    Phew, and I really tried to make my reply brief…

  4. paul
    February 4, 2013 @ 5:00 PM Europe/Berlin

    Hi Euan,

    I forgot to reply to your last message – apologies for that. You make some good points here. I love the quote from David's paper at the start. It is, as he says, well worth making that point clearly. And as you allude to at the end, we need more evidence. We need to capture and share our experiences as a starting point for building a consensus as to best practice approaches. Regardless of our disagreements on dealing with PDFs, I'm 100% behind you on this more general and really important point!

    I noticed however, that you didn't quote back this crucial bit from my previous post:

    Euan said "I do concede that getting this information from users is not always going to happen."

    Paul said "So if we agree that donors/producers won't *always* be able to provide this information… …surely we then have to characterise everything anyway?"

    Isn't this the deal breaker in your position? If we don't *always* get the info from the publisher, we still need some characterisation. If that's the case we may as well characterise everything….

    Paul
