
UML does not equal BDUF

If everything was awesome…

As you might have gathered from other posts on this site, I am a software engineer (or architect, whatever). In this role, I am constantly trying to understand how best to deliver my designs to the programming teams for implementation. In theory, I think that the Agile movement, with its highly iterative, communication-focused philosophy, is pretty close to ideal. In my experience, the biggest hindrances to a project's successful completion are a lack of communication and competing political agendas. By developing the project in the open, with all of the stakeholders able to voice concerns throughout, there are more opportunities to identify and resolve communication issues that might develop.

but…

While the above statement is true, reality often throws in constraints that make the ideal untenable. I design software for a retail company. In this environment, our direct customers are often employees at our headquarters, but they are serving as proxies for the retail employees. In a typical project, we deliver software to them and, after they sign off on it, they create training material and prepare to introduce the software to the stores. In this reality, the cost of change after release is very high, since there is a lot of work involved in the training. Rapid change cycles can lead to an enterprise-level game of crack-the-whip. Since the people on the end of the whip are also the primary source of revenue generation, you can imagine that we try to limit this.

so…

To work in this environment, agreement has to be reached as soon as possible, not just about the feature set, but also about the application's design, in order to reduce the number of post-launch changes. Pulling these tasks up front reduces the likelihood of running into coding dead-ends that force a lot of rework (as an example, try adding "undo" to an application that doesn't use the command pattern; see the sketch below). By designing the software in the software engineer's design tool of choice (UML for me, thanks), the design of the application can be considered and (mostly) factored properly before the first line of code is written. It also provides an excellent reference base for considering how new features might be worked into the design later, since the design isn't scattered through hundreds or thousands of source files.
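
To illustrate why that particular retrofit hurts, here is a minimal sketch of the command pattern in JavaScript; the canvas methods and the command itself are hypothetical, not from any real project.

// Each user action is wrapped in a command object that knows both how
// to apply itself and how to reverse itself.
function AddLineCommand(canvas, line) {
    this.execute = function () { canvas.addLine(line); };
    this.undo = function () { canvas.removeLine(line); };
}

var undoStack = [];

function run(command) {
    command.execute();
    undoStack.push(command); // remembering each command makes undo trivial
}

function undo() {
    var command = undoStack.pop();
    if (command) {
        command.undo();
    }
}

If actions are instead applied directly, every feature has to be taught how to reverse itself after the fact, which is exactly the kind of rework referred to above.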

all roses have thorns

Agile projects are generally more efficient than ones with a large up-front design phase because, when complete, they in theory contain the minimum amount of code needed to solve the problem. This makes it easier to comply with the YAGNI philosophy. When done up front, the design must be more flexible to allow for the inevitable last-minute shifts in requirements. However, I have found that this is not entirely bad. Enterprise software typically has a long lifespan, and additional flexibility in the design allows it to absorb changes more gracefully than a purpose-built system might.

Lessons Learned from Creating my First Course for Pluralsight

Way back in July of 2014, a coworker and I decided that we would like to take a shot at becoming authors for Pluralsight. We had both used the site over the years and were impressed with the quality and breadth of the courses offered for such a reasonable rate. This site has become one of the core components of my toolbox for software development work.

In the beginning of January, my first course went live (check it out here), and so I thought I would take a few moments to reflect on how I got there.

The Audition

In order to become an author for Pluralsight, I filled out a simple form, and they contacted me to discuss what I wanted to teach. While my interests cover a lot of ground, I had been doing a lot of design and development work with the Dojo Toolkit (dojotoolkit.org), so I decided to start there. I had to submit a short clip illustrating my teaching style and my ability to put together a cohesive presentation on a topic. I decided that I would give a talk about how to add "toast messages" to a web application. It was simple, easy to cover in a few minutes, and something that I had done several times recently.

It turns out that Pluralsight was in the midst of leveling up its expectations for how course material should be delivered, and I got caught in the middle of it. When I first encountered the site, Pluralsight's course offerings were aimed at disseminating the knowledge of industry experts to a wide audience. This often led to courses that, while containing very good content, lacked the world-class presentation that really makes the material easy for the student to absorb. As they have grown, they have become more sensitive to how best to present content. This led me to a series of "revise and resubmit" requests that forced me to raise my own standards quite a bit. While the process was personally difficult, I can never thank them enough for taking the time to teach me some awesome presentation techniques that shape every presentation I make now. If you are interested, there is one course that was key: John Papa's course on public speaking is excellent, and I basically used his suggestions as the template for my final (and successful) audition video.

The Course

After I was approved as an author, it was time to author something. I was introduced to my editor, and he walked me through the authoring process. As you might imagine, first-time authors are given more support and oversight in order to help them be successful. I was asked to submit several course ideas and then met with one of the content vice presidents to choose one. After that, I put together a more formal course outline and started to create the course material.

The Process

To create the course, I adopted a process that loosely matches my software development flow.

Course Design

Before I started recording, I spent quite a bit of time trying to figure out exactly what I wanted to talk about. During this phase, I used a mind-mapping tool to structure the basic course flow, which allowed me to take the high-level course outline and break it down into the detailed things that I wanted to talk about. This is also the time when I created all of the demos for the course. While this won't be true for every course, this course was based on a single demo application that evolves throughout the course, adding new elements along the way. With this strategy, I found it very helpful to have the demo application's evolution completely nailed down before I started to record.

Course Implementation

With the course outline complete, I entered what I call my “production loop” for each module of the course. It consisted of the following steps:

1) Create slides

In this phase, I created whatever slides I needed in the module, including presentation-level animations. This phase allowed me to really get an idea of the flow of the module and how the demo would integrate into it. Of course, I was also trying to organize my thoughts and decide what I might say at different points along the presentation. When the slides were done, I would set up the presentation to automatically transition between slides every second or so. This made it easy to record the slides and slice up the video later.

I would not record audio in this phase (or the next). I tried doing that once or twice, but I found that there was too much for me to keep track of, and the quality really suffered. As a result, the audio was handled separately and combined later.

2) Record demo

The preparation of the slides really helped me get my head around what I wanted to tell the student during that module. I would then record the demo or demos that were going to be used in the module. I would take a few minutes and prepare a "demo script"; this script listed what I wanted to do and in what order. This helped me avoid skipping steps, which would require (painful) rework. If multiple demos were required for the module, I would still create them all at once. This allowed me to stay in the same place mentally (writing code and demoing results) rather than constantly context-switching.

As a final part of this step, I would combine the two video feeds (slides and demo) into the order that I wanted for production.

3) Write script

Through the first two steps, I would think a lot about what I wanted to say, but I wouldn't record anything. This allowed me to alter the direction of the slides or demo as I felt was needed, with no fear of having to re-record audio. With the video now in its production-intent order, I would watch through it and write the script for what I wanted to say throughout the slides and demo. This is also the part of the process where I would place the clip boundaries that make up the different pieces of a module. In addition to breaking the work down for me, this also served as a nice quality check. Since I was watching the video to create the script, I was hyper-focused on what was going on. This allowed me to find many errors in the video and correct them.

4) Record audio

After the script was done, it was time to record audio. I would do this with only my script and recording software open. I had a pretty good idea of what the video would be showing (since I already had that), but I didn’t want the distraction of watching it while trying to read my script. I recorded the audio in a different application than the video (see the tools section below) to allow me to use the best tools I could for that work. I found this technique to be really productive for me because I could record an audio clip and rough edit it without worrying about resyncing the video to the audio. When a clip was recorded and edited, I would save it out as a separate file.

At the end of this process, my audio was basically production ready. I knew (within a few seconds) the final length of the module by adding up the length of the different clips. I also knew where I wanted the clip boundaries, since they aligned with the audio clips on a one-to-one basis.

5) Sync audio and video

At this point, I had all of the production audio and video, but they were completely out of sync. Due to how the previous steps had left the project, this was pretty easy to correct. By the end of recording, the audio clips represented the production intent of the course. This meant that most of the editing involved simply altering the video to align with the audio. I would generally use three techniques – freezing the video on a frame, speeding up a section of the video (for large blocks of coding that don't require a lot of explanation), or deleting a section of video and adding a transition (for 'auto-completing' a block of code).

This phase is also where I added the call-outs to the video – visual elements that are overlaid in order to bring the student's attention to a specific part of the screen that I am talking about. I normally did this as I synced up a section, not as a second pass after the synchronizing was done.

6) Final review and course metadata

At this point, the module is production-intent. I would always listen to the clips one more time in order to make sure that everything worked together. Specifically, I would check for blank frames (where the video feed has a gap) or missed clip boundaries (the last frame from the previous clip is accidentally moved to the first frame of the next, causing a 'flash' at the beginning of the video).

Finally, I would package up the demo code, create the quiz questions, and complete some files that Pluralsight uses to wire the course into their site infrastructure. With that done, it was time to submit the course to my editor for review.

The Review

After a module was submitted, Pluralsight put it through a multi-step internal review process. Sometimes, despite my best efforts, issues would creep into the module that were not up to their standards. After the review was complete, I received feedback on the good and the bad in the module. Also, if any corrections were required, I was informed about what had to be addressed.

The Tools

Mind Mapping – Mindmup

I have used several mind-mapping tools over the years, but I have become completely addicted to always being able to access my data. I looked around and found mindmup to be an excellent web-based mind-mapping tool that allowed me to store my files directly on Google Drive. It can also run disconnected (as a Chrome App), so I could work even when I didn't have internet access.

Notes – OneNote

I've used Evernote quite a bit in the past, but I have recently moved to OneDrive as my cloud storage provider of choice. Since OneNote ties in seamlessly to OneDrive through its web application, it became my go-to choice for notes. I used OneNote to record all of my high-level ideas, scripts, and feedback so that I had all of it in one place that I could access from anywhere.

Storage – OneDrive

Pluralsight recommends the use of PowerPoint or Keynote for creating presentations. Since I am a Windows user and Microsoft has made it so economical to get OneDrive and Office together, I went that route. All of my course material is stored on OneDrive so that I get free backups and universal accessibility.

Video Recording and Editing – Camtasia

Camtasia aims to be a one-stop shop for recording, editing, and producing video. I found it to be very good at that, but I was not thrilled by its audio-editing capabilities. Most of this has to do with the fact that I tended to end up editing audio in the context of my video, which caused me problems. I ended up using another tool for audio editing and importing the completed audio clips into Camtasia for syncing and editing.

Audio Recording and Editing – Audacity

Open source projects often amaze me, and Audacity is one of them. I ended up using it as my go-to tool for recording and editing the audio. Not only did it let me focus in on the audio (without the distraction of the video), but it also contains a wide array of post-processing tools that allowed me to clean up many audio problems without forcing me to re-record. Given all of the power that it offers, I found it really easy to make many edits (inserting re-recorded clips, deleting sections, etc.).

Images – Fotolia

Fotolia is an excellent source for images when creating presentations. One of the challenges that John Papa gives in his public speaking course is to resist the urge to create presentations that are composed of slide after slide of bulleted lists. High-quality images can go a long way toward making content easier for the student to understand. While I did use some of the free images made available through PowerPoint's clipart, I found that spending a few dollars at Fotolia would allow me to find an image that exactly captured the sense that I was trying to convey on a slide.

Summary

Overall, I had an absolute blast creating this course, and I look forward to creating many others for Pluralsight. If you think you have something interesting to talk about, feel free to reach out to them and start the authoring process. If you have any questions, reach out to them, or tweet me and I'll do my best to help.

If you want to see the course, or any of the other courses I have published, check out my author page on Pluralsight.

Software Engineering’s most wanted: Programmers

Software is one of the most complicated engineered products being produced today. However, current "best practice" in the software development world seems to be lagging behind the other engineering disciplines in at least one area – we still think that "software engineer" is another word for "programmer".

I am a mechanical engineer by training and spent the first several years of my career as a product design engineer for an automotive company. When I started working, my group was composed exclusively of engineers. We were responsible for:

– working with program management to interpret and define requirements

– performing the calculations that would transform those requirements into product characteristics

– performing calculations to ensure that durability requirements would be met

– creating 3D models of the design

– creating drawings from the 3D models that would allow tooling to be built, etc.

– answering design-related questions throughout product development

– overseeing the validation and verification of the design via virtual and physical testing

When I decided to transition to software development several years ago, I realized that all of these roles mapped one-to-one from mechanical design to software design.

However, there was an interesting thing that happened in the mechanical design world that doesn't seem to have occurred yet in software design: over time, the products that my company was creating grew increasingly complex. One result of this was that the design engineers became increasingly overloaded as they tried to keep up with all of their responsibilities. The first attempt to solve this was to reduce the number of projects per engineer by hiring more engineers. This worked in the short term, but quickly ran into concerns about the cost of engineering. The long-term answer was to tease out a subset of the engineers' responsibilities and create a new role to manage it. Enter the draftsperson. This role isn't new in the mechanical design industry, so it was a pretty obvious solution. The company started to employ drafters (experts in the creation of 3D models and the supporting drawings) and moved that responsibility off of the engineer. This role is typically much less expensive than an engineer while producing higher-quality drawings, since drafters are specialists while engineers work as generalists. Drafters are also generally easier to hire, since their skill set is more focused and requires less schooling (a 2-year technical degree vs. a 4-year Bachelor of Science).

So then I moved to software development, and I found that there are generally three roles in hands-on software development:

– Junior developer

– Developer

– Senior developer

Notice something – they are all developer roles. This means that the developer is responsible for all of the aspects of designing and developing the software, as well as interfacing with program management. This, combined with the complexity of modern software, sets up projects for failure, since the developers are overloaded right out of the starting gate. A lot of what I have read about solutions to this problem involves further decentralization and attempts to bring everyone up to the same level of awesome. This may work in environments that can hire the top talent in the field, but I do not think that it is a viable general solution. I propose that there are (at least) two roles in software development that, if recognized, would go a long way toward easing the pressure on developers.

Roles

Software Engineer

A software engineer is responsible for engineering the software product. This role is composed of developers who specialize in understanding the business requirements and can translate those requirements into a cohesive design that integrates well into the existing software landscape. This role would be responsible for deciding what language(s) the application would be written in and what application frameworks to use (if any), and would generally become the go-to person for how the application does (or should) work.

Programmer

The programmer takes the role of drafter in this model. They receive development tasks that are defined by the software engineer and they implement them. They are not necessarily aware of the overall system design, nor are they responsible for it. They are, however, highly specialized people who know how to implement a programming task with extremely high quality (including development of unit tests). They have to rely on the software engineer to help them answer relevant questions about how their current task integrates into the whole system, but they only need to know small pieces at a time.

Benefits

These two roles allow the engineer to work at a higher level in the system development space and stay there. They are not making system design decisions in the middle of writing a search algorithm. The programmer, on the other hand, doesn't have to worry about where the whole system is going or how things are going to integrate together; they get their assignments from the engineer and serve as a force multiplier for that role.

I think that this separation of responsibilities yields several benefits to the enterprise and the individual:

1) The role of programmer doesn't require the same skill set that an engineering role does.

I would expect the role of programmer to be easier to hire for, since it emphasizes hard skills. Additionally, this role would be more cost-effective for the hiring organization due to lower salary ranges.

2) The role of engineer frees top talent to work on more projects.

Instead of tying up a talented developer in the implementation of their design, the engineer can work on more projects at one time. This allows the scarce pool of top developers to have a greater impact on the organization.

3) Codifies mentoring into organizational structure

Many programmers might aspire to be engineers, and this structure provides opportunities for them to be coached by the most talented developers in the organization. Some programmers will be content to have that as their career (just as many drafters just want to do drafting). Those programmers who wish to grow into an engineering role will have regular face-time with an existing engineer who can work with them to understand more about how their work fits into the big picture.

Summary

It is clear that not every developer is equal. Like all professions, the software development community represents a wide variety of talent and passion. Currently, many companies try to smooth this over by recognizing only the role of developer and distinguishing individuals only as more or less experienced. I believe that the reality is more nuanced. Many developers love the up-front design of a system but find the programming to be a chore. Many love the programming but lack the interest or ability to envision how to break large problems down into codeable solutions. A few love both aspects and perform well in both realms. By making a choice to separate these roles, software development will be open to more people and be more enjoyable as well.

 

Benefits and Costs of Using a Universal Join Table

Background

Last year, I was asked to design my first large software application. I was thrilled at the opportunity and humbled by the trust that my leaders were showing me, since I had only fairly recently joined the company. Over the next few weeks, I was faced with many challenges as I struggled to understand the business problem that this application was trying to solve. In fact, that was the key problem – this business segment was dramatically underserved by software and was mostly paper-driven. In spite of that, it was tapped as a major potential source of revenue growth and was allocated a sizable software budget to help realize that growth.

The challenge was that no one had a clear vision for what the next 12 – 24 months had in store for us. A project was already underway to digitize the paper-based system that was in place, but we were being called upon to deliver software that increased the value proposition of the business, not just replace existing systems.

One Fateful Decision

My supervisor has a passion for databases and database design. He has many ideas, from the benign to the bizarre, about how databases could work, if only the underlying systems existed. One of the ideas that he had implemented on another system that I had worked on was the concept of separating the relationships between database tables from the data that they contained. The idea was that everything was linked through a single many-to-many table that contained two ID columns (a parent and a child) as well as two discriminator fields that pointed back to the tables that held the data for the objects. This allowed me to defer the decision about what parent-child relationships were needed, since we could create and destroy relationships in whatever way we wanted without ever adjusting the database schema. Since I had no idea how data was going to be related over time, and I didn't know that this was supposedly a horrible idea, I decided to make this the core of how data related to each other in our database.
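
To make the concept concrete, here is a minimal sketch of the table's shape and the two-step lookup it implies; the table, column, and helper names are illustrative only, not our actual schema or code.

// The universal join table, conceptually:
//   ITEM_REL(parent_id, parent_type, child_id, child_type)
// where parent_type/child_type name the table that holds each object's data.

// Loading the children of a parent takes two steps. `db.query` is a
// hypothetical helper that runs SQL and returns an array of rows.
function loadChildren(db, parent, childType) {
    // Step 1: hit the join table to find the IDs of the children.
    var rels = db.query(
        "SELECT child_id FROM item_rel " +
        "WHERE parent_id = ? AND parent_type = ? AND child_type = ?",
        [parent.id, parent.type, childType]);

    // Step 2: inflate each child with its own call to its data table.
    return rels.map(function (rel) {
        return db.query(
            "SELECT * FROM " + childType + " WHERE id = ?",
            [rel.child_id])[0];
    });
}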

The Immediate Impact

Our application code was written in Java (built on WebSphere and using the Spring MVC framework), so our initial thought was to use JPA (the Java Persistence API) to build the object graphs for us. Unfortunately, JPA did not know how to handle our case, and we had to turn to one of our developers to spin up a magic module that would handle the loading and saving of object graphs. This allowed us to work in Java without normally having to deal with the external join table (called "Item Rel", for "item relationship" table).

The persistence module mentioned above is not a piece of code to be approached lightly. Since it basically has to handle the parent-child portion of an ORM and work flawlessly – in a culture without a strong testing mindset – the code is a bit…daunting. It does, however, work extremely well and is almost never touched, even though the requirements on it are very high.

Performance Impact

There were strong concerns that Item Rel would kill the performance of the application. These came from two basic performance hits that this concept caused. First, without foreign keys in the data tables themselves, there was no way that we could implement eagerly loaded objects – every node in the object graph had to be inflated with its own call to the database (the only exception was collections, which could be pulled using a single request). Secondly, every loaded child took not one, but two database calls: one to Item Rel to find the child's ID and then a call to the child table to load the object by ID. We all felt strongly that we had a ticking time bomb on our hands and that Item Rel would eventually have to be phased out. My hope was that we would gain enough flexibility at the beginning of the project to justify the pain that would be experienced when this occurred.

Recently, a few releases have been made, and the business has started to express concern about application responsiveness. We all knew what it was, and started to steel ourselves to pay the debt that we had been accumulating. Before we got to work, one of the developers profiled the application so that we could target the largest performance hits from Item Rel and dismantle those parts first. Here is what he found: none of the issues traced back to Item Rel. Not a single one of the top ten performance problems was related to this strategy. They all came back to the basics of database performance – make sure you have the right indexes and tune your custom SELECT statements using query plan analysis.

How can this be?

I'll be the first to admit that I'm not a database guy. I am much more comfortable working in JavaScript than PL/SQL, but I do have some ideas about why Item Rel is working for us.

1) Low demand. Our application is being hit on the order of thousands of times per hour. Modern databases simply have no trouble keeping up with that, and throwing an additional factor of two or three at them due to Item Rel isn't enough to stress them. This would, of course, not be true for a large e-commerce site, but it is true for us.

2) Databases are smart. I haven't confirmed this, but I have this mental model of the database: if there is one highly indexed table that is being hit all the time, I would imagine that the database would optimize that table to stay in memory. Always. If this is true, then the additional SELECT call that hits Item Rel might take almost no time at all, since it will be hitting an indexed table that is in memory anyway.

In short, databases are darn FAST and designed to operate at crazy high loads before slowing down. They are also really good at their job. Even when faced with a non-conventional strategy, they are good at reacting and giving a pretty amazing level of performance.

Conclusion

I would certainly not recommend that you run out and refactor your database schemas to match this model. However, I do think that this project has taught me once again that there is no one right way to solve a problem in software development. Assessing your unique problem in the context of the available technology will often yield solutions that are not quite in line with the norm. And that is okay…

Mixing synchronous and asynchronous functions in Dojo

Background

This week, I was presented with an interesting problem in a project that I designed a few months ago. The application is a tablet-based drawing application that allows the user to add free-drawn lines and various templates and stencils to describe a product that they would like to have us custom-make for them. The application is primarily a client-side web application written with the Dojo Toolkit, with extensive use of custom widgets to provide functionality.

The design decouples the data of a drawable item from its actual representation in order to honor the separation of responsibilities and ensure that the application remains as maintainable as possible. When the application needs to draw something onto the screen, it calls a factory with the data object and receives back the correct renderer for that object.
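
As a minimal sketch of that factory idea (the item types and renderer names here are hypothetical, not the application's real ones):

// Maps a drawable item's data object to a renderer that can draw it.
var rendererFactory = {
    create: function (item) {
        switch (item.type) {
            case 'line':
                return new LineRenderer(item);
            case 'stencil':
                return new StencilRenderer(item);
            default:
                throw new Error('Unknown item type: ' + item.type);
        }
    }
};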

The Problem

Until recently, everything had been working well, except for a few bugs that were being sussed out during pilot roll-outs. While working on a bug fix, a developer ran into a problem that appeared to be a race condition. When a drawing item is initially added to the canvas, it is highlighted by a bounding box. While this generally worked, the bounding box was consistently failing to render for stencils. This problem appeared to be worse in the pilot groups (remote from the data center) than for developers (working in the same facility as the data center).

It turns out that a race condition was taking place: the loading of the stencil's backing image was racing against the code that needed the image's size to generate the bounding box. Naturally, the remote facilities saw the issue more often because their network speeds were dramatically lower than what the developers were experiencing. This raised an interesting problem: all of the rendering code was (mistakenly) written to be synchronous – the application assumed that drawing objects would always have a known size once created. However, since stencils have to be loaded from the server, the size is unknown until the image data is actually loaded.

The Investigation

The obvious solution was to make the stencil renderer work in an asynchronous manner. It would wait until the image was available and then provide the requested dimensions to all callers. However, this is a different path than the other renderers were using, since they could work synchronously with no errors. Changing everything to asynchronous execution was an option, but it would force a much larger surface area of the application to be touched, dramatically increasing the possibility of introducing errors.

The Solution

It turns out that, once again (this seems to happen all the time), the Dojo Toolkit comes to the rescue. There is a module called dojo/when that takes a value (or a promise) and a callback, and it ensures that the callback is called with the result. For a plain value, it passes the result directly into the callback. For an asynchronous call, it detects the promise, waits until the promise is resolved, and passes the resolved result into the callback. As a result, the calling code needed a minor refactor (to use dojo/when), and then the stencil renderer could do the "right thing" and handle the async call properly.
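
As a minimal sketch of the pattern (renderer, item, and drawBoundingBox are hypothetical stand-ins for the application's real code):

require(["dojo/when"], function (when) {
    // getSize() returns a plain object from the synchronous renderers and
    // a promise from the stencil renderer, which must wait for its backing
    // image to load. dojo/when passes either result to the callback.
    when(renderer.getSize(), function (size) {
        drawBoundingBox(item, size.width, size.height);
    });
});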

 

Transpiling SVG to JavaScript – part 2

I was recently working on a project that was rapidly approaching its deadline. A week and a half before it was due to be turned over for user-acceptance testing (UAT), a bug was submitted stating that the tester could not save. At this point in the project, bugs had dwindled down to small, tweaky things like changing text or modifying the workflow, so I didn't expect anything too dramatic here. Sometimes I'm very wrong…

My company has a native application that hosts a WebView, through which we deliver our content as web applications. The application that I was working on is a drawing application that will utilize some pretty edgy HTML5 and CSS3 technologies to attempt to deliver a near-native experience via a web app.

The Problem

The application is based on the canvas tag. It allows the user to add a combination of free-drawn lines and images onto the canvas to produce their desired image. Additionally, the user can reselect the items and manipulate them (i.e. translate, rotate, and scale) so that they fit together well. Originally, all of the images were in .PNG format. This worked well, except that the scaling operation would often leave the images fuzzy or improperly rendered. This was especially true with the stencil images, which consist mostly of black and white line art.

To improve the rendering of the stencils, we elected to change to the .SVG vector graphic format. Since vector graphics are resolution-independent, the image quality was always excellent. As an additional bonus, .SVG images tend to be smaller than .PNGs, so we got a network load reduction for free.

Then the sky fell down.

It turns out that there is an issue with rendering .SVG images onto a canvas. As discussed here, many browsers won't let you get the image data from a canvas after an .SVG has been rendered to it. Ever. There is no workaround; it just doesn't work. This has been corrected in many browsers, but the native browser for Android devices (pre-KitKat) has never been patched.
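
A minimal sketch of the failure we hit (the file name is hypothetical, and the exact behavior varies by browser):

var img = new Image();
img.onload = function () {
    var canvas = document.createElement('canvas');
    var ctx = canvas.getContext('2d');
    ctx.drawImage(img, 0, 0);
    // On an affected browser, the canvas is now treated as tainted,
    // and reading it back throws a security error instead of
    // returning the image data.
    var data = canvas.toDataURL();
};
img.src = 'stencil.svg';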

The Solution

In a previous post, we had parsed the original SVG image’s XML document and generated a representation of that document (called an Abstract Syntax Tree or AST) like this:

[Diagram: the AST generated from the original SVG document]

From here, we can convert the document into any format that we want. For this project, we want to get a Dojo module that creates an image that is a representation of the original SVG at whatever size we want.

The Renderer

A renderer, in this context, will take an AST and export it to a different format. To accomplish this, I created an SvgModuleRenderer that looks like this:

import com.sterling.custlibrary.converters.svgtokens.SvgRootToken;

public class SvgModuleRenderer {
    
    private SvgModuleInitializerRenderer svgModuleInitializerRenderer = new SvgModuleInitializerRenderer();
    private SvgGetBaseImageSizeFunctionRenderer svgGetBaseImageSizeFunctionRenderer =
            new SvgGetBaseImageSizeFunctionRenderer();
    private SvgSetImageSizeFunctionRenderer svgSetImageSizeFunctionRenderer = new SvgSetImageSizeFunctionRenderer();
    private SvgDrawingHelperFunctionRenderer svgDrawingHelperFunctionRenderer = new SvgDrawingHelperFunctionRenderer();
    private SvgGetImageFunctionRenderer svgGetImageFunctionRenderer = new SvgGetImageFunctionRenderer();
    
    private LineRenderer lineRenderer = new LineRenderer();
    
    public String render(SvgRootToken root) {
        String result = "";
        lineRenderer.increaseIndent();
        
        result += renderHeader();
        result += renderInitializers(root);
        result += renderGetImageFunction(root);
        result += renderDrawingHelperFunctions();
        result += renderSetImageSizeFunction();
        result += renderGetBaseImageSizeFunction();
        result += renderFooter();
        
        return result;
    }
    
    private String renderInitializers(SvgRootToken root) {
        String result = null;
        
        result = svgModuleInitializerRenderer.render(root, lineRenderer);
        
        return result;
    }
    
    private String renderGetImageFunction(SvgRootToken root) {
        String result = null;
        
        result = svgGetImageFunctionRenderer.render(root, lineRenderer);
        
        return result;
    }
    
    private String renderDrawingHelperFunctions() {
        String result = null;
        
        result = svgDrawingHelperFunctionRenderer.render(lineRenderer);
        
        return result;
    }
    
    private String renderSetImageSizeFunction() {
        String result = null;
        
        result = svgSetImageSizeFunctionRenderer.render(lineRenderer);
        
        return result;
    }
    
    private String renderGetBaseImageSizeFunction() {
        String result = null;
        
        result = svgGetBaseImageSizeFunctionRenderer.render(lineRenderer);
        
        return result;
    }
    
    private String renderHeader() {
        String result = "";
        
        result += lineRenderer.render("define([], function() {");
        
        result += lineRenderer.renderEmptyLine();
        
        return result;
    }
    
    private String renderFooter() {
        String result = "";
        
        lineRenderer.increaseIndent();
        result += lineRenderer.renderEmptyLine();
        
        result += lineRenderer.render("return {");
        lineRenderer.increaseIndent();
        result += lineRenderer.render("getImage: getImage,");
        result += lineRenderer.render("setImageSize: setImageSize,");
        result += lineRenderer.render("getBaseImageSize: getBaseImageSize");
        lineRenderer.decreaseIndent();
        result += lineRenderer.render("};");
        
        lineRenderer.decreaseIndent();
        result += lineRenderer.render("});");
        lineRenderer.decreaseIndent();
        
        return result;
    }
}

It more or less follows the template pattern: it coordinates the other renderers and tells them when to render their content, providing them whatever data or helper methods are required. The major pieces of the module are:

  • The header
  • The initializers
  • The “getImage” function
  • The drawing helper functions
  • The “setImageSize” function
  • The “getBaseImageSizeFunction”
  • The footer

The Header

This renderer exports static text that is common to every module we are going to render. It simply exports the following code:

define([], function() {

The Initializers

When Dojo loads a module, it executes the function that is provided as the second argument to the define function (added in the header section above). Whatever is returned from this function is the public definition of the module. We can take advantage of the fact that we are guaranteed to have this function executed once, and only once, to provide some initialization logic. That is what the initializer renderer does. It declares the variables that are going to be used by the public methods but aren't exported from the module for public consumption. The output of this section looks like this:

var canvas = document.createElement('canvas');
var ctx = canvas.getContext('2d');
var currentX = 0;
var currentY = 0;
var baseWidth = {root.getWidth() + 4};
var baseHeight = {root.getHeight() + 4};
var width = baseWidth;
var height = baseHeight;
var ratio = width / baseWidth;

The only dynamic code that we find here is the injection of the SVG image's base height and width. These are used in the scaling operations to determine where the final points will be placed when rendering the output.

The “getImage” function

The getImage() function represents the core functionality of the module. This is the function that is responsible for generating the image at the currently requested resolution. It is also the longest method, by far, when rendered since each rendering instruction will be added here. The code (without rendering instructions) looks like this:

function getImage() {
  canvas.height = height;
  canvas.width = width;
  ctx.translate(scale(-1 * {root.getXOffset()}), scale(-1 * {root.getYOffset()}));
  .
  .
  .
  <<rendering instructions>>
  .
  .
  .
  var result = new Image();
  result.src = canvas.toDataURL();
  return result;
}

The method starts by assigning the canvas the currently required height and width (simultaneously clearing the canvas) and setting up an offset for the drawing. This offset is required to handle SVG images whose viewBox does not start at (0, 0). Next, all of the rendering instructions are added (discussed below), and then the resultant image is generated and returned.

The rendering of the instructions is done by iterating through the AST and passing each token, based on its type, to a method that knows how to render the associated JavaScript code. As an example, let's consider a <circle> element. The original element is normally in the form:

<circle  cx="75" cy="100" r="50" opacity="0.5"/>

The exported JavaScript for this tag will look like this:

ctx.beginPath(); 
ctx.arc(scale(75 + 2), scale(100 + 2), scale(50), 0, Math.PI * 2); 
ctx.globalAlpha = "0.5"; 
ctx.fillStyle = "#fff"; 
ctx.fill();

Every instruction will be processed using a similar strategy until the entire document is processed.

The drawing helper functions

These functions are present to streamline the code in the getImage() function as much as possible. They provide helpers for path operations, like move and bezier curve, so that the getImage() function can make a single call that performs all of the steps needed to make that happen. For example, the getImage() function might have a statement like:

curveTo({cp1x}, {cp1y}, {cp2x}, {cp2y}, {x}, {y}, {isAbsolute})

This would call the curveTo helper function, which looks like this:

function curveTo(cp1X, cp1Y, cp2X, cp2Y, x, y, isAbsolute) {
	
	cp1X = isAbsolute? (scale(cp1X) + scale(2)) : scale(cp1X) + currentX;
	cp1Y = isAbsolute? (scale(cp1Y) + scale(2)) : scale(cp1Y) + currentY;
	cp2X = isAbsolute? (scale(cp2X) + scale(2)) : scale(cp2X) + currentX;
	cp2Y = isAbsolute? (scale(cp2Y) + scale(2)) : scale(cp2Y) + currentY;
	x = isAbsolute? (scale(x) + scale(2)) : scale(x) + currentX;
	y = isAbsolute? (scale(y) + scale(2)) : scale(y) + currentY;

	ctx.bezierCurveTo(cp1X, cp1Y, cp2X, cp2Y, x, y);
	
	currentX = x;
	currentY = y;
}

As you can see, there is a lot involved in getting the correct values for the points, rendering the curve, and then updating the current point on the canvas for future operations that are relative to the current point.

The “setImageSize” function

The setImageSize function is another static function that allows the consuming code to request getImage() to generate an image at a different size. It exports this code:

function setImageSize(newWidth, newHeight) {
  height  = newHeight;
  width = newWidth;
  ratio = newWidth / baseWidth;
}

The “getBaseImageSize” function

This is another simple function; it allows the consuming code to find out how large the image wants to render itself at its "native" size.

function getBaseImageSize() {
  var result = {
    height: baseHeight,
    width: baseWidth
  };

  return result;
}

The footer

Finally, the module renderer will add the footer to the module. This contains the code that returns the object that defines the module’s public API and the required code to close off the module itself.

  return {
    getImage: getImage,
    setImageSize: setImageSize,
    getBaseImageSize: getBaseImageSize
  };

});


Results

After all of this work, we can take an SVG document that looks like this:

<svg xmlns="http://www.w3.org/2000/svg" viewBox="0 0 93 78" enable-background="new 0 0 93 78">
  <g stroke="#231F20" stroke-miterlimit="10">
    <path fill="#fff" d="M4.5 8.5h83v61h-83z"/>
    <path fill="#D1D3D4" d="M10.5 8.5h10v61h-10zM71.5 8.5h10v61h-10z"/>
    <path fill="#fff" d="M.5.5h11v77h-11zM81.5.5h11v77h-11zM11.3.4l9.8 7.7M82.1.4l-9.8 7.7M11.3 77.3l9.8-7.7M72.3 69.6l9.8 7.7"/>
    <path fill="#BCBEC0" d="M15.5 8.5h62v61h-62z"/>
  </g>
</svg>

Which renders this image:

[Image: the stencil rendered from the SVG document above]

It is now generated by a module with this code:

    define([], function() {

        var canvas = document.createElement('canvas');
        var ctx = canvas.getContext('2d');
        var currentX = 0;
        var currentY = 0;
        var baseWidth = 97.0;
        var baseHeight = 82.0;
        var width = baseWidth;
        var height = baseHeight;
        var ratio = width / baseWidth;

        function getImage() {
            canvas.height = height;
            canvas.width = width;
            ctx.translate(scale(-1 * 0.0), scale(-1 * 0.0));
            ctx.beginPath();
            moveTo(4.5, 8.5, true);
            lineTo(83.0, 0.0, false);
            lineTo(0.0, 61.0, false);
            lineTo(-83.0, 0.0, false);
            ctx.closePath();
            ctx.globalAlpha = 1.0;
            ctx.lineWidth = '1'
            ctx.strokeStyle = '#231F20';
            ctx.stroke();
            ctx.fillStyle = '#fff';
            ctx.fill();

            ctx.beginPath();
            
            ctx.globalAlpha = 1.0;
            ctx.lineWidth = '1'
            ctx.strokeStyle = '#231F20';
            ctx.stroke();
            ctx.fillStyle = '#fff';
            ctx.fill();


            ctx.beginPath();
            moveTo(10.5, 8.5, true);
            lineTo(10.0, 0.0, false);
            lineTo(0.0, 61.0, false);
            lineTo(-10.0, 0.0, false);
            ctx.closePath();
            ctx.globalAlpha = 1.0;
            ctx.lineWidth = '1'
            ctx.strokeStyle = '#231F20';
            ctx.stroke();
            ctx.fillStyle = '#D1D3D4';
            ctx.fill();

            ctx.beginPath();
            
            moveTo(71.5, 8.5, true);
            lineTo(10.0, 0.0, false);
            lineTo(0.0, 61.0, false);
            lineTo(-10.0, 0.0, false);
            ctx.closePath();
            ctx.globalAlpha = 1.0;
            ctx.lineWidth = '1'
            ctx.strokeStyle = '#231F20';
            ctx.stroke();
            ctx.fillStyle = '#D1D3D4';
            ctx.fill();

            ctx.beginPath();
            
            ctx.globalAlpha = 1.0;
            ctx.lineWidth = '1'
            ctx.strokeStyle = '#231F20';
            ctx.stroke();
            ctx.fillStyle = '#D1D3D4';
            ctx.fill();


            ctx.beginPath();
            moveTo(0.5, 0.5, true);
            lineTo(11.0, 0.0, false);
            lineTo(0.0, 77.0, false);
            lineTo(-11.0, 0.0, false);
            ctx.closePath();
            ctx.globalAlpha = 1.0;
            ctx.lineWidth = '1'
            ctx.strokeStyle = '#231F20';
            ctx.stroke();
            ctx.fillStyle = '#fff';
            ctx.fill();

            ctx.beginPath();
            
            moveTo(81.5, 0.5, true);
            lineTo(11.0, 0.0, false);
            lineTo(0.0, 77.0, false);
            lineTo(-11.0, 0.0, false);
            ctx.closePath();
            ctx.globalAlpha = 1.0;
            ctx.lineWidth = '1'
            ctx.strokeStyle = '#231F20';
            ctx.stroke();
            ctx.fillStyle = '#fff';
            ctx.fill();

            ctx.beginPath();
            
            moveTo(11.3, 0.4, true);
            lineTo(9.8, 7.7, false);
            moveTo(82.1, 0.4, true);
            lineTo(-9.8, 7.7, false);
            moveTo(11.3, 77.3, true);
            lineTo(9.8, -7.7, false);
            moveTo(72.3, 69.6, true);
            lineTo(9.8, 7.7, false);
            ctx.globalAlpha = 1.0;
            ctx.lineWidth = '1'
            ctx.strokeStyle = '#231F20';
            ctx.stroke();
            ctx.fillStyle = '#fff';
            ctx.fill();


            ctx.beginPath();
            moveTo(15.5, 8.5, true);
            lineTo(62.0, 0.0, false);
            lineTo(0.0, 61.0, false);
            lineTo(-62.0, 0.0, false);
            ctx.closePath();
            ctx.globalAlpha = 1.0;
            ctx.lineWidth = '1'
            ctx.strokeStyle = '#231F20';
            ctx.stroke();
            ctx.fillStyle = '#BCBEC0';
            ctx.fill();

            ctx.beginPath();
            
            ctx.globalAlpha = 1.0;
            ctx.lineWidth = '1'
            ctx.strokeStyle = '#231F20';
            ctx.stroke();
            ctx.fillStyle = '#BCBEC0';
            ctx.fill();


            var result = new Image();
            result.src = canvas.toDataURL();
            return result;
        }

        function curveTo(cp1X, cp1Y, cp2X, cp2Y, x, y, isAbsolute) {
            cp1X = isAbsolute? (scale(cp1X) + scale(2)) : scale(cp1X) + currentX;
            cp1Y = isAbsolute? (scale(cp1Y) + scale(2)) : scale(cp1Y) + currentY;
            cp2X = isAbsolute? (scale(cp2X) + scale(2)) : scale(cp2X) + currentX;
            cp2Y = isAbsolute? (scale(cp2Y) + scale(2)) : scale(cp2Y) + currentY;
            x = isAbsolute? (scale(x) + scale(2)) : scale(x) + currentX;
            y = isAbsolute? (scale(y) + scale(2)) : scale(y) + currentY;

            ctx.bezierCurveTo(cp1X, cp1Y, cp2X, cp2Y, x, y);

            currentX = x;
            currentY = y;
        }

        function lineTo(x, y, isAbsolute) {
            x = isAbsolute? (scale(x) + scale(2)) : scale(x) + currentX;
            y = isAbsolute? (scale(y) + scale(2)) : scale(y) + currentY;

            ctx.lineTo(x, y);

            currentX = x;
            currentY = y;
        }

        function moveTo(x, y, isAbsolute) {
            if (isAbsolute) {
                currentX = (scale(x) + scale(2));
                currentY = (scale(y) + scale(2));
            } else {
                currentX += scale(x);
                currentY += scale(y);
            }
            ctx.moveTo(currentX, currentY);
        }

        function drawText(text, font, textSize, fillStyle, strokeStyle, x, y, isAbsolute) {
            if (isAbsolute) {
                currentX = (scale(x) + scale(2));
                currentY = (scale(y) + scale(2));
            } else {
                currentX += scale(x);
                currentY += scale(y);
            }

            ctx.font = "'" + textSize + " " + font + "'";

            if (fillStyle) {
                ctx.fillStyle = fillStyle;
                ctx.fillText(text, currentX, currentY);
            }

            if (strokeStyle) {
                ctx.strokeStyle = strokeStyle;
                ctx.strokeText(text, currentX, currentY);
            }
        }

        function scale(value) {
            return (value) * ratio;
        }

        function setImageSize(newWidth, newHeight) {
            height = newHeight;
            width = newWidth;
            ratio = newWidth / baseWidth;
        }

        function getBaseImageSize() {
            var result = {
                height: baseHeight, 
                width: baseWidth
            };
            return result;
        }

        return {
            getImage: getImage,
            setImageSize: setImageSize,
            getBaseImageSize: getBaseImageSize
        };
    });

Conclusion

These posts have been about how we can take an SVG XML document and convert it into a Dojo module. While this certainly isn't the prettiest solution, and it promises to be challenging to manage over time, it is possible. In the end, we have a tool that reimplements enough of the SVG standard to allow the project to continue and, at the end of the day, that is sometimes the best we can hope for.

One final comment: the techniques used here are actually pretty common across many tasks that convert data from one domain to another. Using a parser to generate tokens, the tokens to build an abstract syntax tree, and then the AST to generate the desired output gives a nice workflow and allows the code to remain maintainable as the sophistication of the converter increases.

Transpiling SVG to JavaScript, part 1

I was recently working on a project that was rapidly approaching its deadline. A week and a half before it was due to be turned over for user-acceptance testing (UAT), a bug was submitted stating that the tester could not save. At this point in the project, bugs had dwindled down to small, tweaky things like changing text or modifying the workflow, so I didn't expect anything too dramatic here. Sometimes I'm very wrong…

My company has a native application that hosts a WebView, through which we deliver our content as web applications. The application that I was working on is a drawing application that will utilize some pretty edgy HTML5 and CSS3 technologies to attempt to deliver a near-native experience via a web app.

The Problem

The application is based on the canvas tag. It allows the user to add a combination of free-drawn lines and images onto the canvas to produce their desired image. Additionally, the user can reselect the items and manipulate them (i.e. translate, rotate, and scale) so that they fit together well. Originally, all of the images were in .PNG format. This worked well, except that the scaling operation would often leave the images fuzzy or improperly rendered. This was especially true with the stencil images, which consist mostly of black and white line art.

To improve the rendering of the stencils, we elected to change to the .SVG vector graphic format. Since vector graphics are resolution-independent, the image quality was always excellent. As an additional bonus, .SVG images tend to be smaller than .PNGs, so we got a network load reduction for free.

Then the sky fell down.

It turns out that there is an issue with rendering .SVG images onto a canvas. As discussed here, many browsers won't let you get the image data from a canvas after an .SVG has been rendered to it. Ever. There is no workaround; it just doesn't work. This has been corrected in many browsers, but the native browser for Android devices (pre-KitKat) has never been patched.

The Solution

When I discovered this, the project team had a very bad day. We were 9 days away from releasing the application for testing, and we had no option of delaying. Our first thought was to move off of the native application and run the app in Chrome, but this introduced many issues (how to get to Chrome, how to get back when done, the Chrome browser wasn't a testing target so the quality was unknown, …). As we were mentally gearing up for this painful transition, someone from another team mentioned the idea of writing a transpiler that interpreted the .SVG file (which is just an XML document, after all) and dynamically generated a Dojo module capable of generating an image at whatever size is requested.

Research

SVG documents are actually XML files that use a special namespace to describe all of the instructions that are used to generate the image. For more information, I would recommend visiting the w3.org page describing the standard. For this post, we will be using the following document as an example:

<svg xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" width="1600" height="900" viewBox="0 0 1600 900">
	<g>
		<g>
			<g fill="#808285" font-family="'Muli'" font-size="12">
				<text transform="translate(765 650)">TEXT LABEL</text>
			</g>
		</g>
		<g>
			<path fill="#fff" stroke="#231F20" stroke-miterlimit="10" d="M934 449.9c0 73.6-59.7 133.3-133.3 133.3s-133.3-59.7-133.3-133.3 59.7-133.3 133.3-133.3 133.3 59.7 133.3 133.3zm-133.3-111.4c-61.5 0-111.4 49.9-111.4 111.4s49.9 111.4 111.4 111.4 111.4-49.9 111.4-111.4-49.9-111.4-111.4-111.4z"/>
		</g>
	</g>
</svg>

This document produces an image that looks like this:

[Image: the rendering of the SVG document above]

The Process

The creation of a transpiler consists of three general steps:

  1. Parse the source document into usable text tokens
  2. Create an object graph that describes the document in a presentation-agnostic manner
  3. Render the object graph created in step (2) into the new format

Separating the workflow in this manner gives a much more flexible and maintainable result in the end. Further, since the goal was only to implement a subset of the SVG standard, the resulting code needed to be extensible to handle future needs.
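
As a minimal sketch of that three-step flow, in JavaScript for illustration (the function names are placeholders, not the project's actual API, which is written in Java):

// 1) Parse the source document, 2) build a presentation-agnostic AST,
// 3) render the AST into the target format.
function transpile(svgXml) {
    var dom = parseXml(svgXml);       // step 1: source text -> tokens/nodes
    var ast = buildAst(dom);          // step 2: nodes -> object graph (AST)
    return renderDojoModule(ast);     // step 3: AST -> generated Dojo module
}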

The Parser

Examining the SVG document above, two sources of useful information are revealed: the tags (<svg>, <g>, …) and the attributes on those tags (e.g. fill="#808285"). Most of these elements can be handled the same way, with the exception of the <path> element. Its "d" attribute holds a lot of information and will require special handling.

In order to handle the different tags, the factory method pattern is used to obtain a parser that is specialized for each tag. For example, the svg (<svg>) tag has the viewBox, width, and height attributes, whereas the group (<g>) tag can have attributes such as fill, font-family, and font-size that need to be captured. Additionally, each parser needs access to the factory in case its node has children that need to be handled as well. An example of one of the parsers follows:

public class SvgGroupParser implements SvgParser {
    
    private SvgParserFactory svgParserFactory;
    
    public SvgGroupParser(SvgParserFactory svgParserFactory) {
        this.svgParserFactory = svgParserFactory;
    }
    
    @Override
    public void parse(Node node, SvgToken parent) throws Exception {
        if (node != null) {
            SvgGroupToken token = new SvgGroupToken(parent);
            parent.addChild(token);
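            // First, copy the presentation attributes this transpiler understands
            // onto the new token; then (below) delegate each child node to a
            // parser obtained from the factory.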
            if (node.hasAttributes()) {
                NamedNodeMap attributes = node.getAttributes();
                for (int i = 0; i < attributes.getLength(); i++) {
                    Node attribute = attributes.item(i);
                    if (attribute.getNodeName().equalsIgnoreCase("fill")) {
                        token.setFillStyle(attribute.getNodeValue());
                    } else if (attribute.getNodeName().equalsIgnoreCase("stroke")) {
                        token.setStrokeStyle(attribute.getNodeValue());
                    } else if (attribute.getNodeName().equalsIgnoreCase("stroke-width")) {
                        token.setStrokeWidth(attribute.getNodeValue());
                    } else if (attribute.getNodeName().equalsIgnoreCase("font-family")) {
                        token.setFontFamily(attribute.getNodeValue().replace("'", ""));
                    } else if (attribute.getNodeName().equalsIgnoreCase("font-size")) {
                        token.setFontSize(Double.parseDouble(attribute.getNodeValue()));
                    }
                }
            }
            
            if (node.hasChildNodes()) {
                for (int i = 0; i < node.getChildNodes().getLength(); i++) {
                    Node child = node.getChildNodes().item(i);
                    SvgParser parser = svgParserFactory.getParser(child);
                    if (parser != null) {
                        parser.parse(child, token);
                    }
                }
            }
        }
    }
}


The parse method is the key to this class. It expects an XML Node object (assumed to be a group node in this case) and the node’s parent token. The method creates the target object (an SvgGroupToken) and then populates it with anything interesting in the node’s attribute list. After obtaining that information, the method looks for any children and asks the SvgParserFactory for a parser to process each one. The end result is an object graph that looks very similar to the original XML document, but in a format that is easier to work with in Java.

Parsing the Path node

Path nodes include a special attribute, "d", that contains all of the commands that describe the path itself. Due to the complexity of this attribute, another level of parsing is required. In this case, a special kind of SvgToken that also implements the SvgPathElement interface is used.

public interface SvgPathElement {
    void addPoint(double point);
    
    boolean hasEnoughPoints();
    
    void setIsAbsolute(boolean value);
    
    double getX();
    
    double getY();
    
}

The generation of these objects is the responsibility of the SvgPathParser.

public class SvgPathParser implements SvgParser {
    
    private SvgPathElementFactory svgPathElementFactory;
    
    public SvgPathParser(SvgPathElementFactory svgPathElementFactory) {
        this.svgPathElementFactory = svgPathElementFactory;
    }
    
    @Override
    public void parse(Node node, SvgToken parent) throws Exception {
        SvgPathToken token = new SvgPathToken(parent);
        parent.addChild(token);
        if (node.hasAttributes()) {
            NamedNodeMap attributes = node.getAttributes();
            for (int i = 0; i < attributes.getLength(); i++) {
                Node attribute = attributes.item(i);
                if (attribute.getNodeName().equalsIgnoreCase("fill")) {
                    token.setFillStyle(attribute.getNodeValue());
                } else if (attribute.getNodeName().equalsIgnoreCase("stroke")) {
                    token.setStrokeStyle(attribute.getNodeValue());
                } else if (attribute.getNodeName().equalsIgnoreCase("stroke-width")) {
                    token.setStrokeWidth(attribute.getNodeValue());
                } else if (attribute.getNodeName().equalsIgnoreCase("d")) {
                    parseDirections(token, attribute.getNodeValue());
                }
            }
        }
    }
    
    private void parseDirections(SvgPathToken parent, String directions) throws Exception {
        List<String> rawTokens = tokenizeDirections(directions);
        
        SvgPathElement currentElement = null;
        String currentCommand = null;
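        // Walk the raw tokens: a letter token starts a new path element via the
        // factory; numeric tokens are fed to the current element until it reports
        // hasEnoughPoints(), at which point the command repeats with a new element.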
        for (int i = 0; i < rawTokens.size(); i++) {
            String s = rawTokens.get(i);
            if (isString(s)) {
                if (currentElement != null) {
                    if (!currentElement.hasEnoughPoints()) {
                        throw new Exception("Unexpected command, not enough points in previous path element");
                    }
                }
                
                currentCommand = s;
                currentElement = svgPathElementFactory.getPathElement(currentCommand, parent, currentElement);
                parent.addPathElement(currentElement);
            } else {
                if (currentElement != null) {
                    if (currentElement.hasEnoughPoints()) {
                        currentElement = svgPathElementFactory.getPathElement(currentCommand, parent, currentElement);
                        parent.addPathElement(currentElement);
                    }
                    currentElement.addPoint(Double.parseDouble(s));
                } else {
                    throw new Exception("Unexpected point value: no active path element to add point to");
                }
            }
        }
    }
    
    private boolean isString(String value) {
        boolean result = false;
        
        result = value.matches("[a-zA-Z]*");
        
        return result;
    }
    
    private List<String> tokenizeDirections(String directions) throws Exception {
        List<String> result = new ArrayList<String>();
        boolean inNumber = false;
        boolean foundDecimal = false;
        String inProcessToken = "";
        
        // Scan character by character: a letter flushes any pending number and
        // becomes a one-character command token; digits and '.' extend the current
        // number; a '-' or a second '.' ends one number and starts the next; a
        // space simply ends the current number.
        for (int i = 0; i < directions.length(); i++) {
            char charNow = directions.charAt(i);
            if ((charNow >= 'a' && charNow <= 'z') ||
                    (charNow >= 'A' && charNow <= 'Z')) {
                if (inNumber) {
                    result.add(inProcessToken);
                }
                result.add(String.valueOf(charNow));
                inProcessToken = "";
                inNumber = false;
                foundDecimal = false;
            } else if ((charNow >= '0' && charNow <= '9')) {
                if (inNumber) {
                    inProcessToken += String.valueOf(charNow);
                } else {
                    inProcessToken = String.valueOf(charNow);
                }
                inNumber = true;
                foundDecimal = foundDecimal; // don't change state
            } else if (charNow == '.') {
                if (foundDecimal) {
                    result.add(inProcessToken);
                    inProcessToken = String.valueOf(charNow);
                } else {
                    inProcessToken += String.valueOf(charNow);
                }
                inNumber = true;
                foundDecimal = true;
            } else if (charNow == '-') {
                if (inNumber) {
                    result.add(inProcessToken);
                }
                inProcessToken = String.valueOf(charNow);
                inNumber = true;
                foundDecimal = false;
            } else if (charNow == ' ') {
                if (inNumber) {
                    result.add(inProcessToken);
                }
                inProcessToken = "";
                inNumber = false;
                foundDecimal = false;
            } else {
                throw new Exception("Unrecognized token found: '" + String.valueOf(charNow) + "'");
            }
        }
        
        if (inProcessToken != null && inProcessToken.length() > 0) {
            result.add(inProcessToken);
        }
        
        return result;
    }
}

This class uses the private parseDirections method to handle the creation of the SvgPathElements. It starts by calling another private method, tokenizeDirections, to break the path’s raw string into tokens that each represent a command or a coordinate. For example, the fragment "M934 449.9c0 73.6" tokenizes to ["M", "934", "449.9", "c", "0", "73.6"]. The parseDirections method then converts those raw tokens into SvgPathElements by generating a path element (via the SvgPathElementFactory) and then feeding the required number of coordinates to that element. Since each kind of path element needs a different number of points, each one has to let the parser know how many it requires. This is done with the hasEnoughPoints method: the parser checks that method and, while it returns false, passes the next point to the element using the addPoint method. Each SvgPathElement is responsible for tracking the points that it needs and assigning provided points to the correct place.

public class SvgMoveToken extends SvgToken implements SvgPathElement {
    public SvgMoveToken(SvgToken parent) {
        super(parent);
    }
    
    private boolean isAbsolute = false;
    private double x;
    private double y;
    private int currentPoint = 0;
    
    public boolean isAbsolute() {
        return isAbsolute;
    }
    
    public void setIsAbsolute(boolean isAbsolute) {
        this.isAbsolute = isAbsolute;
    }
    
    public void addPoint(double point) {
        switch (currentPoint) {
        case 0:
            x = point;
            break;
        case 1:
            y = point;
            break;
        }
        
        ++currentPoint;
    }
    
    public double getX() {
        return x;
    }
    
    public double getY() {
        return y;
    }
    
    public boolean hasEnoughPoints() {
        return currentPoint == 2;
    }
}

The SvgMoveToken (above) needs two points to function correctly (the x and y coordinates to move to). The hasEnoughPoints method therefore just checks whether two points have been assigned before returning true. The addPoint method follows the SVG standard in that the first point after a move command is the x coordinate and the second is the y, so it uses the count of points already assigned to decide where to store each new number.

Half way there

At this point, the parsers have completed examining the original document and converted it into an abstract syntax tree. The tree looks something like this (trimmed in places for brevity):

[Image: the resulting token graph, an SvgToken root with nested group, text, and path tokens]

Wall coloring experiment

My wife and I have been talking lately about letting our girls choose new paint colors for their rooms. As you might expect, young girls have a lot of ideas about what would look “perfect”. Now, don’t get me wrong: I love my girls, and I don’t mind painting their rooms, but I was having horrible visions of repainting seven times before we finally found an acceptable mix of “but I want it” and “what will the neighbors say”.

GIMP to the rescue

I have been using GIMP for years, and love it. I know that it isn’t as flashy as some other image editing software, but it is solid, open-source, and perfect for most of my needs. I decided to try to take some pictures of my girls’ rooms and virtually repaint them in GIMP to give all of us an idea about what the new room colors would look like.

Preparation

I started by taking pictures of my daughter’s room from several angles. I then chose one that I thought would give a good representation of how the new color scheme would look with her existing furnishings.

[Image: the base photo of the room]

Next, we need to separate the sections that we are going to repaint from the bits that are going to stay. For that, I used GIMP’s “Quick Mask” tool, which let me “paint” the region that I wanted to select. Below, you can see this as the area that is not overlaid with red.

[Image: the photo with the Quick Mask selection highlighted]

With the walls and ceiling highlighted, it is trivial to cut them out and paste them onto a new layer. This yields two layers with the following content:

[Images: the background layer and the foreground layer]

The next step was to repeat the process to separate the wall from the ceiling, so that they could easily be painted different colors. I then converted the base color of the walls and ceiling into transparency. What this left behind was a translucent image where the color was gone but the grays of the shadows remained. Notice the bright area in the middle of the ceiling: that is the spot cast by the room light.

[Images: the wall shadows and the ceiling shadows]

The result

At this point, almost all of the real work is done, and it is time for the fun. By selecting the shadowed areas, I could apply “paint” to the walls and ceiling separately and iterate through wall colors very quickly. The images below took less than an hour to generate. The only thing I added was another layer, on top of both the walls and ceiling, that applied a gray overlay. This was necessary because many of the images looked too vibrant for real life; adding the gray desaturated the colors a bit and made them look more realistic (compare the rainbow to the walls and I think you’ll see what I mean).

Conclusion

Overall, I am very happy with how much this helped my whole family think about what the rooms might look like when freshly painted. I definitely think that this will be a permanent step in my future painting projects.

HTML5 Canvas and CSS Transforms

I am currently working on a project that centers on the ability to create a drawing on a tablet device. The requirements call for a full-featured drawing tool, so each drawing element needs to be manipulable after being placed. Additionally, the drawing needs zoom and pan functionality to allow details to be added more easily. In the initial implementation, all of the drawing and manipulation was handled in pure JavaScript. While this met the technical requirements, the bulk of the target devices had unacceptable performance due to hardware limitations. To address these issues, the rendering pipeline was changed from JavaScript-only to a hybrid of CSS transforms and JavaScript.

Manipulating a drawing

The requirements of the application state that the drawing itself should support panning and zooming, and that each individual drawing element must be able to be translated, scaled, and rotated.

[Images: pan, zoom, translate, scale, and rotate gestures]

My first attempts at doing this involved trying to compose all of the transformations into a single set of translate, rotate, and scale directives. I quickly found that this led to a lot of complicated formulas that didn’t promise to be easy to understand or maintain. Eventually, I discovered that the CSS transform attribute will actually do the math for me. Thus, I built up the final transformation attributes to be:

transform-origin: {elementCenterX}px {elementCenterY}px
transform: translateZ(0px) rotate({elementRotation}deg) scale({elementScale}) translateX({originCorrectionX}px) translateY({originCorrectionY}px) scale({zoomLevel}) translateX({elementTranslateX}px) translateY({elementTranslateY}px) translateX({panX}px) translateY({panY}px)

As you can see, there are a lot of pieces here, but taken individually they are each pretty clear. Each directive is examined below, and a sketch that assembles the full string appears at the end of the walkthrough.

transform-origin: {elementCenterX}px {elementCenterY}px

In general, there are two sets of transformations being applied to the canvas: the local manipulation of an element (to produce the translate, scale, and rotate behaviors), and the global manipulation of the drawing canvas (to produce pan and zoom). These transformations happen about different origin points: the element’s center for the first set, and the upper-left corner of the drawing for the second. Since the local transformations are more complicated, they are handled first, and so the transformation’s origin point is placed at the element’s center via this attribute. Later, we will have to make some corrections to make the global pan and zoom render correctly.

translateZ(0px)

transform-origin: {elementCenterX}px {elementCenterY}px
transform: translateZ(0px) rotate({elementRotation}deg) scale({elementScale}) translateX({originCorrectionX}px) translateY({originCorrectionY}px) scale({zoomLevel}) translateX({elementTranslateX}px) translateY({elementTranslateY}px) translateX({panX}px) translateY({panY}px)

This directive is there to force the browser to render the element on the GPU (ref). This is helpful because all of the other transformations are in a single plane, and some browsers will try to have the CPU render them, which can leave the interaction feeling choppy with a poor framerate. The translateZ directive warns the browser that you might be moving the element through 3D space. At that point, almost all browsers decide that the GPU must be involved.

While this did give a dramatic improvement in performance, older browsers struggled to render the browser window correctly. This was evidenced by the transformed elements “flickering” as they were manipulated. I believe that this is caused by the same pixels being rendered by both the CPU (for most of the browser content) and the GPU (for the transformed elements). More recent browsers and devices (less than two years old) did not seem to have this issue.

[Image: rotate gesture]

rotate({elementRotation}deg)

transform-origin: {elementCenterX}px {elementCenterY}px
transform: translateZ(0px) rotate({elementRotation}deg) scale({elementScale}) translateX({originCorrectionX}px) translateY({originCorrectionY}px) scale({zoomLevel}) translateX({elementTranslateX}px) translateY({elementTranslateY}px) translateX({panX}px) translateY({panY}px)

The rotate directive causes the entire element to rotate clockwise by the specified number of degrees. Since the transformation origin is located at the element’s center, the element appears to rotate about its own center.

[Image: scale gesture]

scale({elementScale})

transform-origin: {elementCenterX}px {elementCenterY}px
transform: translateZ(0px) rotate({elementRotation}deg) scale({elementScale}) translateX({originCorrectionX}px) translateY({originCorrectionY}px) scale({zoomLevel}) translateX({elementTranslateX}px) translateY({elementTranslateY}px) translateX({panX}px) translateY({panY}px)

This scale directive causes the entire element to expand or contract about the transformation origin, which creates the scaling effect for the element that is being manipulated.

[Image: zoom gesture]

translateX({originCorrectionX}px) translateY({originCorrectionY}px)

transform-origin: {elementCenterX}px {elementCenterY}px
transform: translateZ(0px) rotate({elementRotation}deg) scale({elementScale}) translateX({originCorrectionX}px) translateY({originCorrectionY}px) scale({zoomLevel}) translateX({elementTranslateX}px) translateY({elementTranslateY}px) translateX({panX}px) translateY({panY}px)

At this point, the element has been rotated and scaled to its final orientation. The next step is to apply the global zoom to the drawing; however, there is a bit of a problem. As an element is scaled, it grows toward or away from the transformation origin. For the element-scaling operation this was fine: the element grew or shrank about its center, since that is where the transform-origin is. Unfortunately, this is the wrong location for the global zoom, whose render origin is in the upper-left corner. If this isn’t corrected, the drawing will tend to “pull away” from that corner, so a pair of corrective translations is applied. These corrections are the only ones that aren’t dependent on user input: they are derived from the other transformations as follows:

originCorrectionX = -(1 - zoomLevel) * elementCenterX
originCorrectionY = -(1 - zoomLevel) * elementCenterY
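
Plugging in sample numbers makes the effect easier to see. With zoomLevel = 2 and an element centered at (800, 450):

originCorrectionX = -(1 - 2) * 800 = 800
originCorrectionY = -(1 - 2) * 450 = 450

so the element is translated 800px right and 450px down before the zoom is applied, offsetting the drawing’s tendency to pull away from the upper-left corner.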

scale({zoomLevel})

transform-origin: {elementCenterX}px {elementCenterY}px
transform: translateZ(0px) rotate({elementRotation}deg) scale({elementScale}) translateX({originCorrectionX}px) translateY({originCorrectionY}px) scale({zoomLevel}) translateX({elementTranslateX}px) translateY({elementTranslateY}px) translateX({panX}px) translateY({panY}px)

With the correction above, the global zoom level can be applied. One thing to note is the order in which these directives are applied: by adding the correction before the zoom, the correction values are still at the same scale as the rest of the page (the previous scale directive is local to the element, not the drawing itself). This keeps the math a little simpler.

[Image: translate gesture]

translateX({elementTranslateX}px) translateY({elementTranslateY}px)

transform-origin: {elementCenterX}px {elementCenterY}px
transform: translateZ(0px) rotate({elementRotation}deg) scale({elementScale}) translateX({originCorrectionX}px) translateY({originCorrectionY}px) scale({zoomLevel}) translateX({elementTranslateX}px) translateY({elementTranslateY}px) translateX({panX}px) translateY({panY}px)

The next pair of transformations applies the translation to the element. Since this translation is expected to happen at the global zoom level, it is applied after the zooming directive. Fortunately, translation is not affected by the transform-origin, so no corrections are required.

[Image: pan gesture]

translateX({panX}px) translateY({panY}px)

transform-origin: {elementCenterX}px {elementCenterY}px
transform: translateZ(0px) rotate({elementRotation}deg) scale({elementScale}) translateX({originCorrectionX}px) translateY({originCorrectionY}px) scale({zoomLevel}) translateX({elementTranslateX}px) translateY({elementTranslateY}px) translateX({panX}px) translateY({panY}px)

The final directives apply the pan values to move the entire drawing. Technically, these could have been combined with the translate directives above, but I think it is clearer when they are separated, especially with descriptive comments in the code explaining where each set of directives comes from.
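
To tie the walkthrough together, here is a small sketch of how the final strings might be assembled in code. None of these identifiers come from a real API; they simply mirror the placeholder names used above:

interface TransformState {
    elementCenterX: number; elementCenterY: number;
    elementRotation: number; elementScale: number;
    elementTranslateX: number; elementTranslateY: number;
    zoomLevel: number;
    panX: number; panY: number;
}

function applyTransform(el: HTMLElement, s: TransformState): void {
    // Derived values: keep the global zoom anchored to the drawing's corner.
    const originCorrectionX = -(1 - s.zoomLevel) * s.elementCenterX;
    const originCorrectionY = -(1 - s.zoomLevel) * s.elementCenterY;

    el.style.transformOrigin = `${s.elementCenterX}px ${s.elementCenterY}px`;
    el.style.transform =
        `translateZ(0px) ` +                      // force GPU rendering
        `rotate(${s.elementRotation}deg) ` +      // local rotate
        `scale(${s.elementScale}) ` +             // local scale
        `translateX(${originCorrectionX}px) ` +
        `translateY(${originCorrectionY}px) ` +   // zoom-origin correction
        `scale(${s.zoomLevel}) ` +                // global zoom
        `translateX(${s.elementTranslateX}px) ` +
        `translateY(${s.elementTranslateY}px) ` + // element translate
        `translateX(${s.panX}px) ` +
        `translateY(${s.panY}px)`;                // global pan
}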

Conclusion

Overall, I am extremely pleased with how much the CSS transform attribute helped to improve the performance of the application. Also, while the transform attribute seems very complicated, it is made up of logical, easily derived pieces that have turned out to be more understandable and maintainable than the previous JavaScript-only solution.

TypeScript and Dojo, part 2

Over the years, I have used many different frameworks and libraries that try to make JavaScript more consistent and powerful. From VanillaJS, to jQuery, to AngularJS, I have taken many of them out for a spin. One of the first frameworks that I used, however, was a little one known as Dojo.

There are three things that you quickly learn about Dojo:

  • it is very powerful
  • it can be tricky to learn
  • it doesn’t play well with others

This last point means that Dojo apps are typically written the ‘Dojo’ way. That is to say that modules are created using Dojo’s version of AMD, interfaces are built with its UI framework, unit tests are written with its testing framework, and “classes” are created with its class system. The amazing thing is that Dojo comes with all of these capabilities right out of the box. Unfortunately, this high level of integration and functionality does not fit the workflow of a lot of the technologies in the larger JavaScript ecosystem.

One technology that I have been seriously considering recently is TypeScript. Both of these technologies are designed to make large client-side applications easier to build and maintain: Dojo brings a large amount of infrastructure for building multi-layer browser applications, and TypeScript brings strong typing that helps prevent typos, refactor-induced bugs, and the like.

In order to bring these two technologies together, two things needed to be done:

  1. Type definitions had to be generated for the dojo API (a fragment is sketched below)
  2. A method had to be developed to get the dojo and TypeScript module system to work together
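
The generated definitions themselves are not shown here, but to give a flavor of them, a hand-written fragment covering the two modules used below might look something like this (the member list is illustrative, not the actual generated API):

// Hypothetical fragment of the generated definitions (e.g. dojo.d.ts).
declare module dojo {
    // Types the 'dojo/request' module: callable, with convenience methods.
    interface request {
        (url: string, options?: Object): any;
        get(url: string, options?: Object): any;
        post(url: string, options?: Object): any;
    }

    module request {
        // Types the 'dojo/request/xhr' provider.
        interface xhr {
            (url: string, options?: Object): any;
        }
    }
}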

Using TypeScript and Dojo together

Basic Usage

A normal dojo module might look something like this:

 define(['dojo/request', 'dojo/request/xhr'],
     function (request, xhr) {
         ...
     }
 );

When using TypeScript, you can write the following:

 define(['dojo/request', 'dojo/request/xhr'],
     function (request: dojo.request,
         xhr: dojo.request.xhr) {
         ...
     }
 );

Inside the define callback, both request and xhr work just like the functions that come from Dojo, only now they are strongly typed.

Advanced Usage

Creating a TypeScript module with Dojo’s AMD loader

Dojo and TypeScript use different and conflicting class semantics. This causes some issues when trying to create custom class modules that are strongly typed in other modules. The following technique is presented as a solution to the problem, but not necessarily the best one. Other ideas are welcome!

Using pure JavaScript, a class that has a base class and mixins can be defined in Dojo as follows:

 define(['dojo/_base/declare', 'dijit/_WidgetBase', 'dijit/_TemplatedMixin', 'dojo/request'], 
    function(dojoDeclare, _WidgetBase, _TemplatedMixin, request) {
        var Foo = dojoDeclare([_WidgetBase, _TemplatedMixin], {
            templateString: '<div>Hello TypeScript</div>',

            message: '',

            sayMessage: function() {
                alert(this.message);
            },

            getServerInfo: function() {
                request.get('http://dojoAndTypeScriptTogetherAtLast.html', function(data) {
                    console.log(data);
                });
            }
        });

        return Foo;
    }
 );

The goal is to be able to describe the Foo class in a way that TypeScript can recognize, but that also works with Dojo. This requires some hacks that will be introduced and explained as the problems are discovered.

The first challenge that we run into is how to define the class. We will define it using standard TypeScript semantics as follows:

  module App {
    export class Foo extends dijit._WidgetBase implements dijit._TemplatedMixin {
        constructor(public templateString= "<div>Hello TypeScript</div>",
            public message= "") {
            super();
        }

        sayMessage() {
            alert(this.message);
        }

        getServerInfo() {
            request.get("http://dojoAndTypeScriptTogetherAtLast.html", (data: string) => {
                console.log(data);
            });
        }

    }
 }

This class is identical to the standard Dojo version, except that it is declared inside a TypeScript module and built with TypeScript’s class syntax instead of Dojo’s declare method. Two problems arise, however:

  1. Foo has an error because it doesn’t honor the interface declared by dijit._TemplatedMixin
  2. request is undefined

The first problem could be solved by adding the missing properties and methods, but this would only serve to clutter the code base over time. Instead, we create another base class that hides this requirement, like so:

 module App {
    export class Foo extends WidgetBaseWithTemplatedMixin {
        constructor(public templateString= "<div>Hello TypeScript</div>",
            public message= "") {
            super();
        }

        sayMessage() {
            alert(this.message);
        }

        getServerInfo() {
            request.get("http://dojoAndTypeScriptTogetherAtLast.html", (data: string) => {
                console.log(data);
            });
        }

    }

    export class WidgetBaseWithTemplatedMixin extends dijit._WidgetBase implements dijit._TemplatedMixin {
        "attachScope": Object;
        "searchContainerNode": boolean;
        "templatePath": string;
        "templateString": string;
        buildRendering() { }
        destroyRendering() { }
        getCachedTemplate(templateString: String, alwaysUseString: boolean, doc: HTMLDocument) { }

    }
 }

Now the base class meets TypeScript’s requirements, so it is happy. This class could easily be moved out to a general add-in file where it can be created once and forgotten, since it is only here to make TypeScript happy.

The second problem we had was that request is undefined. This is going to take a bit more trickery, as shown below:

module App {    
  export class Foo extends WidgetBaseWithTemplatedMixin {
        constructor(public templateString= "<div>Hello TypeScript</div>",
            public message= "") {
            super();
        }

        public request: dojo.request;

        sayMessage() {
            alert(this.message);
        }

        getServerInfo() {
            this.request.get("http://dojoAndTypeScriptTogetherAtLast.html").then((data: string) => {
                console.log(data);
            });
        }

    }

    export class WidgetBaseWithTemplatedMixin extends dijit._WidgetBase implements dijit._TemplatedMixin {
        public static getPrototype(deps: Object) {
            if (deps) {
                for (var i in deps) {
                    this.prototype[i] = deps[i];
                }

                return this.prototype;
            }
        }

        "attachScope": Object;
        "searchContainerNode": boolean;
        "templatePath": string;
        "templateString": string;
        buildRendering() { }
        destroyRendering() { }
        getCachedTemplate(templateString: String, alwaysUseString: boolean, doc: HTMLDocument) { }

    }
 }


 define(['dojo/_base/declare', 'dijit/_WidgetBase', 'dijit/_TemplatedMixin', 'dojo/request'],
    function (dojoDeclare, _WidgetBase, _TemplatedMixin, request) {
        var deps = {
            request: request
        };

        var Foo = dojoDeclare([_WidgetBase, _TemplatedMixin], App.Foo.getPrototype(deps));

        return Foo;
    }
 );

Yes, I know: pretty crazy, right? But we’re getting close…

Overriding TypeScript’s class inheritence mechanism

In the Dojo module, we build an object that contains references to each of the dependencies. We then pass that object into the static getPrototype method that we added to the base class. This method takes an object literal and mixes it into the class’s prototype. In this way, the module dependencies are made available to the TypeScript class via its prototype. The last thing we need to do is change getServerInfo()’s call from request to this.request, since the dependency now arrives through the prototype instead of the ambient object that is used in plain Dojo.

Okay, great. TypeScript is happy. Everything should be working, right? Wrong.

We have two more problems that are not apparent until the code is actually executed. They are both related to our use of the extends keyword to declare that our Foo class extends dijit._WidgetBase.

The first problem is that, as stated previously, TypeScript has its own implementation of a class system in JavaScript. When one class extends another, TypeScript injects the following snippet into the module:

 var __extends = this.__extends || function (d, b) {
    for (var p in b) if (b.hasOwnProperty(p)) d[p] = b[p];
    function __() { this.constructor = d; }
    __.prototype = b.prototype;
    d.prototype = new __();
 };

This method is called in a closure that wraps the class definition and mixes the parent’s prototype and own properties into the child class. However, this won’t work in our case, because our base class is dijit._WidgetBase, which doesn’t actually exist in the global namespace (where TypeScript expects it), since we are still using Dojo’s class system (via declare). This is an important, and confusing, point. Our class is actually being constructed by Dojo using declare; however, we are working with the class as if it had been created the way TypeScript expects. In short, this means that we don’t actually need the __extends function to work, but something needs to be there so that the constructor function doesn’t die. The fix is actually relatively easy: in the main HTML page, add this function before the script tag that includes dojo.js:

var __extends = function (d, b) {
    if (d && b) {
        for (var p in b) if (b.hasOwnProperty(p)) d[p] = b[p];
        function __() {
            this.constructor = d;
        }

        __.prototype = b.prototype;
        d.prototype = new __();
    }
};

All this does is check that d and b are defined before running. Since TypeScript won’t overwrite an existing __extends, this lets us replace the default implementation.

Okay, only one more thing to deal with: the call to super. This issue is also related to TypeScript’s method of handling inheritance. After calling __extends, the generated constructor function calls the parent’s constructor function. Once again, we are bitten by the fact that our base class (dijit._WidgetBase) doesn’t actually exist where TypeScript expects it. The only way around this is to give TypeScript something to call, and the simplest thing is a no-op function. In short, add this:

 var dijit = dijit || {};
 dijit._WidgetBase = function() {}

into the page after Dojo bootstraps, but before our module loads. The simplest way to do this is to create a little module that does it and add that module to the dependency array of the define() call.

Okay, so things look pretty messy right now. There are several hacks and tricks that we have to play in order to allow TypeScript and Dojo to work together. The nice thing is that most of this can be shoved into a single helper module and never thought of again. Here is an example of what that module would look like:

 "use strict";

 define([], function () { });

 var __extends = function (d, b) {
    if (d && b) {
        for (var p in b) if (b.hasOwnProperty(p)) d[p] = b[p];
        function __() {
            this.constructor = d;
        }

        __.prototype = b.prototype;
        d.prototype = new __();
    }
 };

 window['dojo'] = {};
 window['dijit'] = {
    _WidgetBase: function () {

    }
 };

 module Base {
    function getPrototype(type: Function, deps: Object): Object {
        if (deps) {
            for (var i in deps) {
                type.prototype[i] = deps[i];
            }

            return type.prototype;
        }
    }
    export class WidgetBaseWithTemplatedMixin extends dijit._WidgetBase implements dijit._TemplatedMixin {
        public static getPrototype(deps: Object): Object {
            return getPrototype(this, deps);
        }

        "attachScope": Object;
        "searchContainerNode": boolean;
        "templatePath": string;
        "templateString": string;
        buildRendering() { }
        destroyRendering() { }
        getCachedTemplate(templateString: String, alwaysUseString: boolean, doc: HTMLDocument) { }

    }
 }

This module can then be extended whenever we have another base class / mix-in combination (e.g. dijit/_WidgetBase, dijit/_TemplatedMixin, and dijit/_WidgetsInTemplateMixin). When done this way, the only regularly visible change we have to make is to compose the hash of dependencies and pass the result of getPrototype as the last argument to declare.