And so we’re about to round out 2015. I haven’t done much public of note (other than what’s already documented on this blog), but hopefully everyone has had a productive and enjoyable year. Here’s to a 2016 that’s even better.
If you’re a U.S. citizen under 27 years old, the sunsetting of the USA PATRIOT Act’s Section 215 provision at midnight tonight will mark the first time since before you were a teenager that the U.S. government will not have legal cover to collect data on every phone call you make. (Barring yet more secret interpretations of existing laws.)
Reasonable people can have reasonable debates about what surveillance should be allowed. For example, it’s reasonable to allow police to follow a murder suspect. However, these debates must be had, must be had in public, and must include the public. The USA PATRIOT Act had none of these qualities, and it is fitting that some of its sections have now sunset.
While several portions of the act will likely be reauthorized in the coming days, potentially with some reforms, it is important to not simply accept minor changes as the only possibility. We can – and should – continue to protect privacy as an important civil liberty.
Along with several coworkers, I participated a bit in the PlaidCTF 2015 competition this past weekend. The competition is geared mainly toward binary exploitation, with reverse engineering a secondary priority. There are also a number of challenges which focus on cryptography, forensics, and other security-related issues. Unfortunately, my skills lie more toward the latter category than the former, so I wasn’t as broadly useful as I might have otherwise liked to be. Nonetheless, I was able to contribute solutions to three problems: “RE GEX” (a large regular expression, where it was necessary to find a string that didn’t match the regular expression), “PNG Uncorrupt” (a PNG file with several missing “\x0d” bytes before “\x0a” bytes), and “radhos” (a Python hash collision challenge). I’ve seen good writeups online for RE GEX and PNG Uncorrupt, but haven’t yet seen any good ones for radhos, so I’m contributing my solution here.
radhos is a simple HTTP server that emulates a small hashtable-based key-value store. Each value is stored using the hash of its key, where the hash is the lower 32 bits of Python’s hash() function. (This appears to be intended to give consistent behavior between 32-bit and 64-bit versions of Python.) The level’s flag is stored in the hashtable’s location for the key “you_want_it_LOLOLOL?”, but the server will reject requests that ask for that key by name. Therefore, the remaining solution is to find something that will generate a collision in hash(key). To encourage players to solve the challenge without simply guessing random keys, the server restarts (and gets a random hash() seed) every seven minutes.
Using HTTP requests, you can get the value for a key you select, or alternatively, you can set the value for a key. Setting the value for a key will also return the old value in that location in the hashtable, if any, as well as the value of hash(key). It’s this information leak that allows us to break Python’s randomized hash function, generate a new key that has the same hash value as “you_want_it_LOLOLOL?”, and retrieve the flag.
I initially thought that this would be an instance of the bug described at https://bugs.python.org/issue14621, but that doesn’t appear to be the case. I spent far too long looking at that issue, and trying to replicate similar situations that might allow a collision in this case. However, the issue does have an attachment, hash.py, that gave Python code for the hash algorithm, which was somewhat useful for analysis. The (pre-3.4) Python
hash() function used two secret values to obscure the hash value: a “prefix” that affects the whole hash, and a “suffix” that is simply XOR’d in afterward to obscure the result and prevent attackers simply guessing the prefix. If an attacker is able to determine the “prefix”, they can generate hash collisions on their own. (Having the suffix is unnecessary if collisions are the only goal, as it is simply XOR’d on at the end.)
Here’s a bit of code for generating the hash:
HASHMASK = 0xffffffff

def hashbytes32(data):
    if len(data) == 0:
        return 0
    x = prefix
    x ^= ord(data[0]) << 7
    for byte in data:
        x = (1000003 * x) ^ ord(byte)
        x &= HASHMASK
    x ^= len(data)
    if x == -1:
        x = -2
    return x
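As a sanity check (restated compactly so it runs on its own, with a made-up prefix standing in for the secret one), a one-character string reduces to the closed form used in the h1/h2 equations further down:

```python
HASHMASK = 0xffffffff
prefix = 0x12345678  # arbitrary stand-in for the secret prefix

def hashbytes32(data):
    # Compact restatement of the hash above (suffix handled separately).
    if len(data) == 0:
        return 0
    x = prefix ^ (ord(data[0]) << 7)
    for byte in data:
        x = ((1000003 * x) ^ ord(byte)) & HASHMASK
    return x ^ len(data)

# One character: a single multiply-and-XOR round, then the length XOR.
closed_form = ((((prefix ^ (65 << 7)) * 1000003) ^ 65) & HASHMASK) ^ 1
assert hashbytes32("A") == closed_form
```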
Luckily, there was a talk on hash collisions (Hash-flooding DoS reloaded: attacks and defenses) that referenced a tool that could be used to determine the hash secret in Python. I was able to find the tool at https://131002.net/siphash/poc.py, and verified that it worked.
Unfortunately, it worked based on hashes for “\x00” and “\x00\x00” (one and two null bytes), which we couldn’t submit to the radhos server (as far as I know). A small amount of math allows us to change the target used in the algorithm, however. Let’s say that we have h1 = hash("A") and h2 = hash("B"). Then we know:
h1 = ((prefix ^ (65 << 7)) * 1000003) ^ 65 ^ 1 ^ suffix
h2 = ((prefix ^ (66 << 7)) * 1000003) ^ 66 ^ 1 ^ suffix
Let’s XOR the two, and note that the suffix terms cancel out, as they appear in both hashes. Then we have:
h1 ^ h2 = ((prefix ^ (65 << 7)) * 1000003) ^ 65 ^ ((prefix ^ (66 << 7)) * 1000003) ^ 66
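As a quick numerical check (with made-up prefix and suffix values, since the real ones are secret), we can confirm that the suffix really does drop out of h1 ^ h2:

```python
HASHMASK = 0xffffffff

def h1char(c, prefix, suffix):
    # 32-bit hash of a single character, per the equations above.
    x = ((prefix ^ (ord(c) << 7)) * 1000003) & HASHMASK
    x ^= ord(c) ^ 1  # final character XOR, then the length (1)
    return x ^ suffix

prefix, suffix = 0x12345678, 0x9ABCDEF0  # arbitrary stand-ins
with_suffix = h1char("A", prefix, suffix) ^ h1char("B", prefix, suffix)
without_suffix = h1char("A", prefix, 0) ^ h1char("B", prefix, 0)
assert with_suffix == without_suffix  # the suffix cancels
```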
The tool I mentioned above uses this fact, and guesses the value of prefix one bit at a time, narrowing down possibilities as it goes. It then performs a verification step, testing those possibilities with a known hash of a third value. I was able to verify that I received the correct prefix value by testing locally, so I was satisfied with that part of the code.
As an aside, if the bit-by-bit guessing didn’t work to determine the prefix, it would be possible to use the h1 ^ h2 math above and several additional hashes to brute-force the prefix, but that would likely take a while. Distributing the problem among multiple cores (or computers) might get a result before the server restarted, however, so it might have been worthwhile if the proof of concept code were not available. (Note that brute-forcing the suffix is unnecessary in this case, as XOR’ing the hashes removes it from consideration.)
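To illustrate that fallback (a sketch only: the search space here is shrunk to 16 bits so it finishes instantly, whereas the real prefix is 32 bits and would want parallelization), one could scan for prefixes matching the observed h1 ^ h2:

```python
HASHMASK = 0xffffffff

def pair_xor(prefix):
    # h1 ^ h2 for the keys "A" and "B"; the suffix has already cancelled.
    a = ((((prefix ^ (65 << 7)) * 1000003) ^ 65) & HASHMASK) ^ 1
    b = ((((prefix ^ (66 << 7)) * 1000003) ^ 66) & HASHMASK) ^ 1
    return a ^ b

def brute_prefix(target_xor, bits=16):
    # Return every prefix in the (reduced) space matching the observed XOR.
    return [p for p in range(1 << bits) if pair_xor(p) == target_xor]

secret = 0xBEEF  # pretend 16-bit prefix for the demo
candidates = brute_prefix(pair_xor(secret))
assert secret in candidates  # the true prefix is among the survivors
```

A third known hash would then narrow the candidate list to the real prefix, just as the proof-of-concept tool does.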
Simply having the prefix value is insufficient, as we need to actually generate something that will collide. The obvious solution is a brute-force attack. This is easy to implement, and can be parallelized to get a result before the server restarts. Unfortunately, it’s also fairly slow in general, when another relatively simple solution is available.
Because the hash code is relatively simple, we can perform what is known as a “meet in the middle” attack. In this case, we start with the result we want (we can already calculate the hash for “you_want_it_LOLOLOL?”, because we have the prefix) and work backward:
mask = 0xffffffff

def hashback(instr, target):
    val = target
    for byte in instr[::-1]:
        val = ((val ^ ord(byte)) * 2021759595) & mask
    return val
This works because 1000003 * 2021759595 = 1 (mod 2^32), so multiplying by 2021759595 is equivalent to dividing by 1000003. (Note that in this function, for simplicity, we have already removed the suffix and the size of the string to be hashed.)
So we’ve worked backward and determined what the hash state should be for a given suffix. How does this help? Well, we can do the same thing for many suffixes, and have many possible targets for our brute-forcing. The most efficient way to do this is to try 2^16 suffixes. This turns the brute-forcing from an expected 2^32 operations with 1 unit of storage into 2*2^16 operations with 2^16 storage. RAM is relatively plentiful for this task, so it’s a good trade-off. I’ll let the code below speak for itself, but feel free to leave a comment if you have a question.
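Here is a small self-contained sketch of the meet-in-the-middle idea (using a toy lowercase alphabet and a target chosen so a hit is guaranteed; the real attack uses the recovered prefix, the key “you_want_it_LOLOLOL?”, and 2^16 suffixes):

```python
import itertools
import string

MASK = 0xffffffff
MULT = 1000003
INV = 2021759595  # (MULT * INV) % 2**32 == 1

def core(data, prefix):
    # Hash state before the length/suffix XORs.
    x = (prefix ^ (ord(data[0]) << 7)) & MASK
    for c in data:
        x = ((MULT * x) ^ ord(c)) & MASK
    return x

def peel(tail, target):
    # Run the hash loop backward over a candidate string tail.
    val = target
    for c in tail[::-1]:
        val = ((val ^ ord(c)) * INV) & MASK
    return val

def collide(target_key, prefix, alphabet=string.ascii_lowercase):
    target = core(target_key, prefix)
    # Backward half: required mid-state for each two-character tail.
    table = {peel("".join(t), target): "".join(t)
             for t in itertools.product(alphabet, repeat=2)}
    # Forward half: hash each two-character head and look it up.
    for h in itertools.product(alphabet, repeat=2):
        head = "".join(h)
        if core(head, prefix) in table:
            return head + table[core(head, prefix)]
    return None

prefix = 0x12345678  # stand-in for the recovered secret prefix
result = collide("abcd", prefix)  # any hit is an exact core() collision
```

With equal-length keys, a collision in core() is a collision in the full hash, since the length and suffix XORs then match as well.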
The code I ended up using is below, after some minor cleaning up:
Almost two years ago, I launched (what became) my 2012 book archive. There’s a bit of background on that project on that page. While there have been a few minor developments since then, none of them have been noteworthy enough to document here. Recently, however, I decided that I would try to make PDF copies of the books. I wanted these to be good-quality PDFs, as although a number of PDFs of the books had circulated from other groups in the past, they weren’t particularly visually appealing: they were essentially what you would get by quickly printing each of the HTML files from the publisher.
My guess was that a simple HTML file with all of the book content could be combined with a quick print CSS style and get good-quality output without too much effort. I figured that with a decent number of iterations on the book CSS and test runs, the project might take a week or so. Over two months later, I’m finally set to release the PDFs. I’ve made a few notes of the things that I did along the way, in case anybody finds them useful in the future.
If you just want the PDFs, you can visit the 2012 book archive and download them there. There are a few ways to download each book: you can download a whole-book PDF file, or a PDF for each chapter. Both are accessible from a book’s table of contents, and the whole-book PDFs can be downloaded from the archive’s book list as well.
It’s worth noting that the content I have from the books is already in an HTML format, not necessarily the raw input to a book creator. I decided early on to try to work with HTML and CSS, because although the HTML I have is structured and could probably be parsed into a different format, doing that correctly for over a hundred books would be iffy. With that said, the HTML was well-structured, and lent itself to easy use of CSS to style specific things.
If you’ve been following CSS development, you’ll know there are a lot of features in CSS that are supposed to allow styling of printed content. Unfortunately, support for some of these in browsers is spotty, and they don’t necessarily provide all of the features that you’d like when styling a book. (In particular, footnotes, page-relative positioning, page numbering, and PDF bookmarks are all difficult. There are likely other features I’ve forgotten about as well.)
At first, I was hoping to be able to use something like wkhtmltopdf, which wraps WebKit and converts an input file to a PDF. This gave me a number of problems, and didn’t seem to support the concept of pages as natively as would be desirable. (It’s still impressive that they managed to get the project to work as well as it does, but it doesn’t produce nice-looking books yet.) After that, I decided that perhaps Firefox’s support for printing would work for me: it works pretty well with many of the CSS printing features, and I can probably script Firefox to output PDFs if I want. Unfortunately, I again ran into bugs with the rendering. I don’t recall the details at the moment, but I believe that content had a tendency to not wrap between pages in the right spots. Either way, this sent me in search of a high-quality PDF rendering solution.
If you look online for advice about generating PDFs from HTML, you will inevitably run upon many people suggesting PrinceXML (or, as it seems to be rebranding itself, Prince). They’re probably right. It is a commercial piece of software in a case where I had hoped to use free software, but it is still the best solution I have found by far, both in ease of use and functionality.
Prince itself is not cheap. A personal license is $495 at the time of writing, and even that may not cover what I intend to do in terms of converting books automatically. (To be clear, it might be covered, but only just barely if it is. I haven’t asked, for reasons I’ll explain shortly.) If you are doing anything serious with Prince, you’re probably looking at a $3800 license per server generating PDFs, or a $1900 one if you’re only doing academic things. Upgrades are available for an additional annual cost. To be clear, if you’re generating revenue from the PDFs (or even just saving yourself loads of time), Prince is almost certainly worth every penny, but it’s prohibitive for side projects.
For non-commercial projects, Prince offers a free version with the requirement that you allow it to add a logo and link to the corner of your document’s first page, link to their website wherever you have Prince PDFs for download, and link to their website on a sponsors/partners page. This is mostly unintrusive (although a tad confusing at first: I’ve considered trying to style in a little “Made with” above the logo to explain why it’s there), and very nice of Prince to allow. (To get the “Non-commercial” license, just download the software: you don’t need a special license key or anything.)
In fact, I had a question about their licensing (“The books are licensed under a Creative Commons license that doesn’t allow me to add restrictions to them, so is it required for people who receive the PDFs from me to keep the Prince logo on them? If so, I can’t use the noncommercial license.”), emailed them, and got an email back quite quickly from Håkon Wium Lie, Prince’s Director (not to mention CTO at Opera and founding member of the Pirate Party of Norway). He’s definitely on top of things, and was quite happy to help. (The answer is no, other people can do whatever they want to the PDFs. In my case, they’re still subject to the Creative Commons license they always were, but that’s not because of Prince.) Later, I had a question about how to get something to render correctly (a somewhat minor, obscure layout bug), and quickly received a comment from Mike Day, the CEO, noting that they were looking into the issue. When I followed up, the bug hadn’t yet been fixed (it undoubtedly has tricky interactions with their page layout code), but I quickly received an alternative suggestion complete with example code. Definitely a pleasant experience all the way around.
If you’re looking for a cheaper option to start with, you’ll probably run into DocRaptor as well. DocRaptor started out as Prince-as-a-service, providing an API to allow people to generate PDFs using Prince. It now appears to support Excel files, although I haven’t looked into those features. For many people, the benefits of being able to rely on DocRaptor to scale up as your workloads do (they claim “thousands of documents a second”) and the lower initial costs are probably a great benefit. They also provide well-supported libraries for a number of languages, where Prince usage is largely done by command line (although Prince has a PHP API as well). Overall, DocRaptor almost certainly provides benefits for many people. However, their plans aren’t super cheap either, and they’re targeted at recurring use, not one-shot uses like mine. I generated over 2500 PDFs in my final output (one per book, plus one per chapter), which would probably have cost me $149 in a month, assuming I didn’t want to tweak them later. Still far cheaper than the cheapest Prince license, but pricey for a personal side project like mine.
DocRaptor does have a 7-day free trial, which probably would have allowed me to generate whatever I wanted during that time, but that’s not exactly ideal, either. (Nor do I mind paying something for the service, but over a hundred dollars a shot is high for my purposes.) I emailed the DocRaptor folks about a pay-as-you-go plan (so I wasn’t paying monthly fees when I wasn’t using the service), because I had found references to such a plan elsewhere. I got a very nice response from Matt Gordon, the “lead vocalist” for the group running DocRaptor. Unfortunately, they no longer offer that plan, because they found that disproportionately more of their support costs (and they do provide good support) were going to users who didn’t spend much on the service anyway. We had a nice conversation about the possibility of plans that might support alternative uses such as mine, but it doesn’t sound like there’s anything planned in the immediate future. (I can’t blame them, as they need to make money and do what makes sense for their business to continue existing.) They did make a very nice offer (I won’t disclose the details) that I turned down for unrelated reasons, but they’re definitely nice folks too.
My conclusion is that you pretty much can’t go wrong with Prince or DocRaptor. Both have very nice and responsive folks behind them, and seem to be quite well done.
I also wanted to make sure that chapters and sections were listed in the PDF list of bookmarks. This list is sometimes useful when navigating a book in a PDF viewer, although some viewers don’t show it. Prince again makes this quite easy, simply requiring a CSS annotation for the items you wish to be bookmark headings. (In fact, by default it uses h1-h6 tags, but I disabled that default because it picked up way too many bookmarks.)
In creating the full-book files, I noticed that some books created particularly large files. In general, this appeared to be because they embedded the full source images, rather than resampling them. While an option to resample the images inside Prince would be great, it doesn’t exist at this time. Some of the source images were quite large, and clearly intended to be printed at >= 300 dpi, while most users of the PDFs wouldn’t benefit from such images. My first attempt at reducing file size was to use Ghostscript to resample the images. Ghostscript has some features that work similarly to the now-unavailable Acrobat Distiller, and seemed likely to do the job. Unfortunately, after getting Ghostscript working (Ubuntu 14.04’s version appears to crash on larger documents, but 14.10’s works), I found that it removed page numbering information and bookmarks. The next step was to try to export this metadata using PDFtk before using Ghostscript, and then import it again afterward. Unfortunately, while PDFtk will output page numbering details, it won’t import them into a PDF, and there doesn’t appear to be any easily-available way to do so.
So, I temporarily abandoned the option to resample the images using Ghostscript. (It also may or may not have been worth it in the first place: some Ghostscript-generated files were larger than the Prince originals, so I had to handle both cases.) It may be worth patching Ghostscript in the future to keep the metadata around, but that seems likely to be quite involved. In many cases, you may get some benefit out of using Ghostscript with appropriate options (“gs -q -sDEVICE=pdfwrite -dPDFSETTINGS=/printer -dBATCH -dNOPAUSE -sOutputFile=[outfile] [inputfile]” seems to work well besides the additional metadata), but it was unfortunately unsuitable for my purposes at this time.
Following up on the “whole book PDFs can be pretty big” issue, and after trying to open some such books and experiencing slow loading times, I decided that it may be appropriate to create one PDF per chapter as well as the whole-book PDF. My first pass at creating these PDFs was to use PDFtk to pull out just the pages from a given chapter. This posed a few problems: first, I had to figure out which pages belonged to which chapter. Luckily, the bookmarks inserted by Prince, combined with PDFtk’s metadata output, gave me the starting page for each chapter (although for a few minor reasons, this link was a bit iffy: the generated bookmark title did not always match the section name I had in my database), and I could assume that a chapter ended just before the next one began. Unfortunately, this ran into the same problem I had before: I would lose the page numbering and bookmarks. (Not to mention the fact that I would need to separately render a new first page to describe the licensing and get the Prince logo back on the first page.) Finally, I decided to simply depend on Prince once again.
This solution appears to have worked surprisingly well, with the page numbers matching up where expected. Because the files were rendered separately, there is the possibility of some unforeseen issue (I certainly didn’t inspect the thousands of files by hand), but it seems unlikely.
Finally, when reviewing one of the math textbooks in the collection, I noticed that Prince’s MathML rendering wasn’t particularly great. It is definitely better than nothing, but the rendering quality did leave something to be desired. Unfortunately, the most common web-based solution here, MathJax, doesn’t work very well with Prince. (This is a noted todo item on Prince’s release notes, but it’s not available yet.) After stumbling through a number of other options to try, I ended up using PhantomJS together with MathJax to prerender the math to MathJax’s “HTML-CSS” output (the SVG output didn’t look very good and produced a very large PDF file after the required fixes to make Prince display the SVG output). I forced MathJax to use the STIX fonts (which I installed on my computer), and after the math was rendered, I output the document’s HTML form again (after removing the MathJax wrapper divs). This produced files with reasonably good-looking math, the way they were intended to look. The prerendering code hasn’t been published yet because I haven’t taken the time to clean it up, but if someone is interested, I can definitely post it.
Prerendering with MathJax is a step that seems to have very poor asymptotic time complexity. I haven’t formally benchmarked it, but a chapter’s sections took about two minutes to prerender in total, while the whole chapter itself took roughly twelve minutes to prerender. The whole book took roughly four days to prerender. It’s not clear why this occurred, but the prerendering did eventually succeed. It’s also not clear if this is a bug in MathJax, or simply some inefficiency in PhantomJS, so I have yet to report it as a bug to either project (and may never report it – it’s unlikely to come up in common use).
So, to summarize, getting PDFs of a quality I’m comfortable with took quite a bit of effort. In the end, Prince does most of the work, and I rarely had problems with Prince itself. I think it was worth the effort, at least for a personal learning experience. Hopefully the books will be useful to other people as well. Once again, they’re all available at http://2012books.lardbucket.org. Please feel free to copy or redistribute them as you see fit, pursuant to the terms of the associated Creative Commons by-nc-sa license.
P.S. If you’re interested in any of the print-specific (or Prince-specific) things I did to make the books look decent when printed, it’s all left in the book’s CSS file toward the bottom, under the “prince” @media type. Feel free to reuse any of that styling for any purpose you see fit, in any situation. I do not believe it is covered under the Creative Commons license: you may consider it to be public domain.
Calcsy is a tool you can download and use on your computer to show (and save) the screen of your TI-84+ or TI-89 Titanium calculator.
It’s useful for projecting the screen large enough for other people to see, or for taking a screenshot to use in instructions. For example, I’ve used it in teaching how to graph functions on a calculator, and I suspect it will be similarly useful to other people.
I first wrote Calcsy almost two years ago, and have been (very) slowly making it better since then. It’s now to a state where I think it’s reasonable to release. At the moment, it’s only available for Mac OS X, although it should be possible to port to Windows if there’s enough demand. Hopefully it’s pretty self-explanatory, but feel free to let me know if you have any questions.
All you need is a TI-84+ (or TI-84+ Silver Edition) or TI-89 Titanium and a USB cable to plug it into your computer. The program is free, and you don’t need any special software on your calculator. (You also don’t need one of the special “Presentation Link” adapters – your computer and a normal USB cable work just fine.)
Other details of note: I suspect it won’t work with the very new TI-84+ Color calculators, but I’m happy to try to make it work if someone wants to send me one. Also, the logo was made by David Felice, so thanks are due to him.
Anyway, it’s free, go check it out. Let me know if you have any questions/comments/problems/etc.
Over the past few years, a publishing company called Flat World Knowledge has been publishing a number of textbooks in several subject areas, from history to psychology to math. One of the features they have advertised is their “open” books, meaning in part that their books are available for free online to everyone. Until recently, this was nearly unheard of: students can now legally get their textbooks for free (while paying for extra features if they want them). While I had not heard about their books until recently (likely because they have few math books), this is definitely something I like, at least in the abstract.
Unfortunately, Flat World Knowledge has recently decided that the “open” model will not work for their publishing, because not enough people were buying their books. As much as I would like to argue that such a model should work, I’m sure they have more data than I do, and have undoubtedly done their analysis and decided that such a business model is unsustainable for them at this time. While I hope that they are able to offer their books in an open manner again in the future, they have at this point decided to restrict the way in which their books are available on their website, starting on January 1, 2013. (They have already started implementing this change, as well.)
The good news is that they previously published their books online under a Creative Commons license, a common license which allows redistribution (in particular, the attribution, share-alike, non-commercial license, version 3.0). This means that people have the right to continue to redistribute copies of the books, if they happen to have them.
I am still a bit disappointed: I would have liked Flat World Knowledge to succeed in their open publishing experiment. I would have liked more books to be available, and I would have liked even more companies to follow in their footsteps. Unfortunately, it appears as though that area may remain the realm of private or government financing for the moment.
I would like to remark, for the inevitable debates that will ensue in unseen boardrooms in the future, that the Creative Commons license likely allowed Flat World Knowledge to attract so many books. In nearly every foreword I have read, the authors extolled the open license of the book as a primary reason for publishing with FWK. Were it not for this license, it is entirely probable that FWK would not be in such a favorable position. The ability of others to share your books should be regarded as the feature so many authors see it as, rather than a liability.
Another year has passed, and with just one [other] published post here. I’m slowly starting to have more discretionary time, and to get some projects closer to completion. Barring any major surprises, I should have at least one or two things to release next year. (The problem, of course, with long-running projects is that there are few of them, and if I should decide for whatever reason to not release them, they tend to disappear entirely.)
In the arbitrary milestones department, GraphSketch passed 1.5 million graphs delivered a while back, and I should really pay attention for the 2 million milestone that’ll be coming up sooner or later. I’ve also obtained a full-time job, which surprisingly seems to take less time than school did at times. It seems to be pretty nice, although, as with many things, I haven’t yet taken the time to step back and thoroughly examine how it’s going. There will probably be more comments on the job in the future, but for the moment, allow me to disclaim everything posted here and say that unless an exception is clearly noted in a post, the posts on this blog represent only my own views (if that), and have never been, and will never be intended to represent the opinions of my employer, school, friends, colleagues, or anyone else.
So, this makes for a particularly lackluster sometimes-annual update, but it’s about everything I have to say. Have a Merry Christmas if that’s your thing, and have a great new year when that rolls around. With any luck, I’ll have something new to post in January.
So, earlier today, Karl Fisch asked for a copy of an Olympics video from NBC’s website to use in his school. A number of the “standard” ways of getting the video didn’t seem to work, so I figured I’d pitch in. By the way, the first thing to try (if whatever addons you might have to download YouTube videos don’t work – although I don’t use any myself, I’m told they didn’t work in this case) is to click the YouTube button.
The YouTube button will usually open the video up into YouTube’s website, where most addons for downloading a video will work better, and there are plenty of other websites happy to help you.
In this case, clicking the button only got to the NBC Olympics home on YouTube. A quick search of the channel showed that the video I was looking for wasn’t on the NBC Olympics channel, and I’d have to find another way to download it.
So, over the past year or so, I’ve been working on a project I’m calling Portrait. It’s [going to be] a piece of open-source software system administrators can use to manage Linux computers. Unfortunately, there’s still a fair amount of work to be done on Portrait, because my free time is pretty sparse. I’d also like to launch a hosted version of Portrait, which people can use if they don’t want to deal with setting up and maintaining a copy of the software. To help get both of those things moving, I’ve set up a Kickstarter project, titled simply Portrait: Linux System Management Tool to raise funds to let me work on Portrait over the summer and set up a hosted version that people can use.
Most of my readers won’t personally have any use for a Linux system management tool, but if you happen to know anyone you think might find it useful, I’ll ask you to please pass the Kickstarter project along to them. If you’re interested yourself, please head on over to the Kickstarter project page, where I’ve put much more information on exactly what Portrait will do.
So, I took a few hours to go through and make it work with Closure, and then to make it work compiled outside of Closure in case anyone else wants a smaller version of FABridge. I’m calling the result FABridgeC, and you can read more about it on the project’s GitHub page. For most users, this is a minified FABridge.js (or if you prefer Closure’s terminology, a compiled FABridge.js) in just 5.4 KiB that can be dropped in instead of FABridge.js, and should work in exactly the same manner as the original for almost everyone. (There are a few minor caveats explained on the project page.) If you’re using FABridge at the moment, try dropping it in place of your existing FABridge.js, and let me know how it goes!
Feel free to send me a message via GitHub (I’m aschmitz, the repository’s owner) or leave a comment here with any questions, problems, or other comments.