TUG 2015 – Day 3 – second part

I’d like to complete my report — sorry, the delay was due to a heavy workload. Here’s the second part of the final day of TUG 2015.

Following Julien Cretel’s talk about functional data structures in TeX, Hans Hagen gave a presentation. Its main questions were:

  • How far can you go with TeX?
  • Do you really want it?

Hans Hagen showed fascinating examples, such as text rendered by TeX, fed into MetaPost for postprocessing, and re-rendered by TeX for justification.

One of his examples was about profiling lines. Imagine two columns where everything has to sit on the grid. TeX has paragraphs, but no concept of a line: it pastes hboxes together, each with a height and a depth. TeX doesn’t natively have columns either, but you can implement them. He showed an example of boxed columns with everything on the grid, even with inline fractions. By inspecting the actual content, lines whose heights and depths did not collide could stay closer together on the grid. He implemented a profiling mechanism for this. In the end, he did not use it. Well, except for this talk.

Most of you know the TeX macro \ignorespaces. ConTeXt now also provides \removeunwantedspaces, \removepunctuation, and others. Content can be marked as punctuation, or tagged in any way, so you can remove content that has been marked accordingly.
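As a quick reminder of the classic \ignorespaces, here is a minimal sketch of my own (not from the talk): a macro that swallows the space token following its invocation.

```latex
% \keyword{...} typesets its argument in bold and then eats
% a following space token via \ignorespaces.
\newcommand{\keyword}[1]{\textbf{#1}\ignorespaces}

% "\keyword{TeX} rocks" typesets as "TeXrocks": the space
% after the closing brace is removed.
```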

Finally, he demonstrated some fun aspects of an asciimath implementation, where, for example, writing o twice becomes an infinity sign — with all the challenges that brings.

Boris Veytsman continued with a talk about controlling access to information with TeX. It’s not only about security, but also about simply hiding boring information. Tech people may not be interested in financial information, while finance people may not like reading technical information. So a document may contain both, but show each kind of reader only the relevant part. For this, output-level access control may be sufficient.

Regarding security, documents may contain an open part and a part with classified information. In that case, access control should happen at the input level — earlier than in the former case.

Existing input control works via \includeonly combined with \include, but it has disadvantages and restrictions: every such part starts a new page, with many parts or reader classes things quickly get complicated, and separate files may be confusing.

A classic approach may look like this:

\newif\ifclassified
% \classifiedtrue  % uncomment to include the classified part
\ifclassified\input{classified.tex}\fi

Another solution is provided by the comment package, which offers environments for keeping different versions of information in one document.
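A minimal sketch of how the comment package can be used for this (the version names finance and technical are my own):

```latex
\usepackage{comment}
\includecomment{finance}   % this build shows the finance part
\excludecomment{technical} % the technical part is dropped

\begin{finance}
Quarterly figures go here.
\end{finance}

\begin{technical}
Implementation details go here.
\end{technical}
```

Swapping the \includecomment and \excludecomment lines produces the other version of the document.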

For output-level control, there’s now the multiaudience package.

A usage example:

\SetNewAudience{administrators}
\SetNewAudience{developers}
\SetNewAudience{executives}

The current audience can be set by

\renewcommand*{\CurrentAudience}{administrators}

Then you can work with visibility scopes, such as:

\showto{administrators}{Text visible only to admins}

Even with nesting:

\showto{administrators,developers}{Text visible to both
  \showto{developers}{Text visible only to developers}
more text visible to both}

Exclusion can be done via a minus sign. There’s a shown environment as a companion, and special sectioning and footnote commands support a visibility option. There are limitations: no verbatim text is allowed inside. However, you can use common workarounds such as \path from hyperref, \input inside shown, or \SaveVerbatim from the fancyvrb package.
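Based on the description above, exclusion might look like this — note that the exact placement of the minus sign in the audience list is my assumption; check the multiaudience manual for the authoritative syntax:

```latex
% Hypothetical sketch: show this text to every audience
% except executives.
\showto{-executives}{Text hidden from executives}
```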

The beamer class offers a similar concept: it has a presentation and a handout mode, so also a form of visibility control. The multiaudience package, however, has been developed to support any number of such modes, or visibility classes. This is not secure — it is meant for hiding boring or irrelevant parts.

Regarding security, there should be source-level control. Boris Veytsman showed the tool srcredact, a Perl script with an input syntax inspired by docstrip. There are two modes:

  • extract text for a partial version
  • incorporate changes from a partial version

So there is two-way communication.

As mentioned, the implementation is in Perl; the merge is done by the diff3 program.
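As a rough illustration of what docstrip-style guards look like (this mirrors docstrip conventions; srcredact’s actual markup may differ, so treat the tag names as hypothetical):

```latex
%<*classified>
This paragraph appears only in the full (classified) version
and is stripped from the redacted copy.
%</classified>
This paragraph appears in every version of the document.
```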

Finally, Enrico Gregorio showed examples of good and bad TeX code. He talked about the spurious space syndrome, which has bitten every TeX programmer at some time — or even worse, the “missing required space syndrome”. Missing protection of line endings is a classic. The TeX friends actually had fun visually parsing code looking for spaces.
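The classic case is an unprotected line ending inside a definition; here is a minimal illustration of my own:

```latex
% Bad: the line break before the closing brace becomes a
% space token inside the macro.
\newcommand{\badversion}{1.2
}
% "v\badversion!" typesets as "v1.2 !" with a spurious space.

% Good: '%' comments out the line ending.
\newcommand{\goodversion}{1.2%
}
% "v\goodversion!" typesets as "v1.2!".
```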

He talked about LaTeX3 and showed various expl3 examples. He strongly recommended expl3: even if it adds a thousand lines to load, it’s worth it, and later it will be part of the format anyway. There are some disadvantages, though: the code is much more verbose, and it still requires understanding expansion. He thanked the LaTeX3 team for their great work on expl3. I can only join in the thanks.
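For flavor, here is a small expl3 sketch of my own (a toy counter, not an example from the talk), which also shows the verbosity he mentioned:

```latex
\usepackage{expl3}
\ExplSyntaxOn
% Declare an integer variable following the expl3 naming
% conventions (l = local, demo = module, int = type).
\int_new:N \l_demo_int

% Define a function that increments the counter and
% typesets its current value.
\cs_new:Npn \demo_step:
  {
    \int_incr:N \l_demo_int
    \int_use:N \l_demo_int
  }

% Document-level name for the function above.
\cs_set_eq:NN \DemoStep \demo_step:
\ExplSyntaxOff
% Each \DemoStep in the document now prints 1, 2, 3, ...
```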

Really finally, we had a question-and-answer session. One of the most important questions: where and when will the next TUG meeting be? It shall take place in Toronto, around the end of June and the beginning of July.

I plan to post photos soon; I just need to ask the TeX friends in the photos whether that would be OK. A first one is here: Forum: TUG 2015 conference reports.

Sadly, I had to disable unregistered commenting here because spammers misused it for hundreds of ad posts each day (I kept them invisible and deleted the posts). But you can comment any time in the corresponding forum thread. Sorry that registration is needed (because of the spammers), but once registered you could also help our forum users, if you like. 😉 There are some unsolved topics.

Again, thanks to TUG and to the sponsors DANTE e.V. and River Valley Technologies! And especially to Klaus Höppner, who did a great job as organizer.