Developing Go in Atom

When I started developing software a few years ago, I was introduced to the wonderful world of Integrated Development Environments (IDEs) from the beginning. While these environments have their disadvantages, the primary benefit that I always got from them was the deep autocomplete functionality that they provided. This allowed me to give meaningful (and sometimes lengthy) names to functions and objects without having to worry about remembering exactly how I had named them later.

When I started using Go, I looked for that same experience and found it in the excellent goclipse IDE. I used it for several years, but I started to become dissatisfied with it due to its occasional lagginess and the quirkiness of Eclipse, which I have never been able to get used to. As a result, I started to look for a new, text-editor-based development environment. However, there were a couple of pieces that I had to put together to get it right.

The Editor

There are many text editors out there that have support for Go, so the first step was to settle that question. The big players appear to be:

  • Vim
  • Sublime Text
  • Atom

Since I come from a Windows background and my history with programming is based on IDEs, I’ve never really been able to get comfortable with Vim. I believe that it is ultimately one of the more productive tools out there, but I’ve never been willing to pay the price to achieve that level of understanding so I took a pass on that.

Sublime is great and has, in many ways, led the text-editor revival that we are currently in, but I’m not a fan of its closed-source nature. Also, there was a newer kid on the block so I never really gave it the chance that I probably should have.

Eventually, I settled on the Atom editor that was initially created by GitHub. It is open-source, easy to customize, runs on any platform and has a thriving plugin ecosystem. To be fair, I think that each of these editors makes an excellent platform to develop Go with, but eventually we all have to make a choice; this was mine.


The Plugins

Every text editor that wants to be taken seriously as a development environment has to address the plugin question. Without the ability for the community to expand and grow the capabilities of the tool, it can rapidly become irrelevant as technologies grow and evolve. Atom uses the “Atom Package Manager”, or apm, to manage the plugins used by the editor. It can be accessed from a terminal window (via the `apm` command) or within Atom under the Settings menu. For Go development, the plugin of choice is go-plus, maintained by Joe Fitzgerald. It exposes quite a few of the tools used when developing with Go and can even retrieve any tools that are missing and keep them up to date. Here is a summary of the integration that go-plus provides:

  • Automatic code formatting via `gofmt`, `goimports`, or `goreturns`
  • Code quality inspection via `go vet`
  • Linting via `golint`
  • Test coverage generation via `go test`
  • Autocompletion via gocode (more on this in a bit)

Autocompletion

Go-plus is almost all you need to get working in Go. Its ability to bootstrap itself with the latest tools and keep them up to date means that you generally don’t have to worry about it; go-plus will take care of things for you. However, there was one tricky bit that I had to work through in order to dial everything in the way that I wanted.

In Go, autocompletion is normally provided by the gocode library. It has been around since July of 2009 and has become the de facto standard for providing autocompletion support. It follows the popular pattern of providing a service that other tools can plug into: it starts as a server that waits for autocompletion requests and then inspects all of the code in the current file and on the GOPATH to determine the possible matches. Since it does this in an editor-agnostic way, it can be integrated into a wide variety of development environments and, in fact, forms the basis for autocompletion in Vim, Emacs, Sublime, Atom, Goclipse, and several others.

When I first started working with this stack, everything seemed great. I had quick, reliable tools that made developing in Go pleasant. However, I started to run into problems as soon as I started using multiple packages in my application. It seemed that gocode couldn’t find the types and functions in my own packages, although it could for built-in functions and any third-party libraries that I had installed via the `go get` command. It took some digging, but I eventually discovered the answer and had to add one more tool to my development stack. First, let’s describe the issue.

The Problem

When gocode is called upon to provide the autocompletion options, it examines all of the code in the current buffer as well as code that is accessible via the GOPATH. Initially, I thought that it was parsing the source code in order to do that. What I discovered was that gocode actually inspects the compiled libraries that are stored in the project’s ‘/pkg’ directory when `go get` is used to retrieve a library. Since my packages weren’t compiled, gocode couldn’t find them and, therefore, couldn’t provide me with all of the correct options. I could get things to work as expected by using the `go install` command to install the package; gocode would immediately light up with the correct options, so I knew I had identified the problem. I did not, however, want to manually install each of my packages whenever I changed a line of code. Something had to be done.

The Solution – Automatically Building Packages

While I understood why gocode took this step, I was left with a problem – any non-trivial application that I was going to write needed to have multiple packages, and I didn’t want my autocompletion tool to occasionally fail on me. After looking around a bit, I didn’t see any standard workarounds for this, so I rolled my own. Primarily, I am a web developer and so have become used to working with Node-based automation tools such as gulp and grunt to automate client-side tasks such as running unit tests and minifying JavaScript. Since I was already using gulp in my project, I decided that it would be a convenient place to build a small script that would install my custom packages whenever they changed.

I am not a gulp expert and I wanted something quick, so I had to research for a while until I got something working. Here is what I eventually came up with:

var gulp = require('gulp');
var path = require('path');
var shell = require('gulp-shell');

// Watch only my own packages; other code under src/ should not be auto-compiled.
var goPath = 'src/pack/**/*.go';

gulp.task('compilepkg', function() {
  return gulp.src(goPath, {read: false})
    .pipe(shell(['go install <%= stripPath(file.path) %>'],
      {
          templateData: {
            // Convert the file's absolute path into the package name
            // that `go install` expects.
            stripPath: function(filePath) {
              // Drop the working directory plus the leading "src/" (5 chars).
              var subPath = filePath.substring(process.cwd().length + 5);
              // Keep everything up to the file name: the package directory.
              var pkg = subPath.substring(0, subPath.lastIndexOf(path.sep));
              return pkg;
            }
          }
      })
    );
});

// Recompile a package whenever one of its files changes.
gulp.task('watch', function() {
  gulp.watch(goPath, ['compilepkg']);
});

Let’s break this down one piece at a time. At the top of the file are the imports that make different modules available to the script (similar to import statements in Go). After that is a variable that holds the pattern for files that I wanted to keep an eye on. This demo project had everything under the package “pack” and so you see that in the second position. This could easily have been changed to ‘src/**/*.go’, but I had other code floating around this app that I didn’t want to get autocompiled and so I needed to limit things a bit.

The next block of code is the definition of the task itself. It starts with the task’s name – compilepkg – followed by the function that actually installs the package. That function calls gulp.src to tell gulp which files we’re interested in processing, then passes the files that match the pattern into the pipe function, which invokes the shell function to run a command in the operating system’s shell. The rest of the code in this section is just fiddling with the path in order to convert the absolute file path into the package name that the `go install` command requires.

At the end of the script, I defined a ‘watch’ task that starts watching for the relevant Go files. When one of them changes, then its entire package is recompiled and ready for gocode to use.

Whenever I start working on a project, I simply open a terminal and type `gulp watch` in the project’s root folder to kick everything off and I’m ready to Go.


If you want to learn more about Go, check out my courses on Pluralsight. I regularly create new content that explores different aspects of working with Go.

Go + The Gorilla Toolkit = A Complete Web Development Framework

Go comes out of the box with an excellent set of tools for setting up a basic web application. It contains first-class handling for HTTP messages, cookies, headers, and the like, but it was created as a systems programming language, not a web-development language. As a result, some critical pieces of the standard web-development stack are missing from Go’s base library.

Enter the Gorilla Toolkit

The Gorilla Toolkit attempts to fill in the gaps in Go’s core library by adding nine packages that provide supplemental functionality. Some of the packages address features that are obviously missing and have become ubiquitous in web application development. Others reimplement features that are already part of the base library in order to get around a limitation.

The Packages

Gorilla/Mux

This package adds parameterized routes to Go’s basic route handling logic. Go’s basic router finds the route that best matches a given request and passes the request to that handler for processing; however, these routes have to be statically defined. This means that the RESTful route “/purchaseorders/42”, which would presumably get the purchase order with the ID “42”, cannot be handled, since the 42 is dynamic and the basic route handler isn’t designed to deal with that.

Mux contains an entire suite of route matching logic that allows route handlers to be matched based on a route pattern, information contained in headers, query parameters or just about anything else that can be contained in the header section of an HTTP request.

Gorilla/Reverse

The gorilla/reverse package addresses the general case of matching a request to a handler. Where the mux package deals specifically with route handling, the reverse package uses matchers to give a Boolean (true or false) answer about whether a request matches a route or not. This allows a general section of code to be triggered based on the request. This is useful for things like request filters that monitor requests but aren’t their primary handlers.

The reverse package also contains an interesting set of functions that will combine a regular expression with a set of parameters and turn it back into a string. This is the reverse of how regular expressions are normally used and, I think, how the package got its name. While the use cases for this capability are not immediately obvious, it allows some very powerful techniques, such as generating route templates and hyperlinks from the same basic regular expressions and, hence, ensuring that route patterns stay automatically synchronized throughout the application.

Gorilla/Context

Most web applications do not simply handle requests via a single request handler. Instead, additional sections of code, often called “filters” or “middleware”, gain access to the HTTP request and can handle situations that are not the primary responsibility of the request handlers, such as enforcing authentication requirements.

The gorilla/context package allows information to be stored for the lifetime of the request in an object that is keyed to the request itself. This provides a way for the filters to add information into the request stream without becoming too tightly coupled to the other request handlers.

Gorilla/Schema

Go comes with good, basic form handling bundled into the net/http Request object. However, processing long forms with a lot of information can get cumbersome, and this strategy also doesn’t deal well with converting flat forms into type hierarchies. The gorilla/schema package is designed to provide a turn-key solution to form handling by describing a set of naming conventions for form fields that allows forms to be parsed into complex object graphs, complete with slices, in just a couple of lines of code.

Gorilla/Securecookie

Arguably, one of the biggest features missing from Go’s basic library, at least where web applications are concerned, is the ability to secure cookies. Since cookies are often used to store sensitive user information, such as authorization tokens, they are critical points in the security of a web application and must be handled carefully.

The securecookie package comes with two enhancements to basic cookies that address this shortcoming. First, all cookies that are processed with the securecookie package are bundled with an HMAC hash that ensures that the cookie originated from the server itself. While this means that the cookie can’t be generated by the browser, it also means that it can’t be forged by malicious JavaScript that manages to infect the page. Additionally, cookies can be encrypted as a further step toward ensuring that the content of the cookie can’t be read. This is ideal for circumstances where the server is passing data through the client so that it comes back to the server, allowing HTTP requests to be associated together.
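The underlying idea behind the HMAC protection can be sketched with nothing but the standard library; this is a simplified illustration of the technique, not securecookie’s actual implementation:

```go
package main

import (
	"crypto/hmac"
	"crypto/sha256"
	"encoding/hex"
	"fmt"
)

// The key never leaves the server, so only the server can produce valid tags.
var key = []byte("server-side-secret")

// sign appends an HMAC-SHA256 tag so the server can later prove
// it produced the value itself.
func sign(value string) string {
	mac := hmac.New(sha256.New, key)
	mac.Write([]byte(value))
	return value + "|" + hex.EncodeToString(mac.Sum(nil))
}

// verify splits off the tag, recomputes it, and compares.
func verify(signed string) bool {
	for i := len(signed) - 1; i >= 0; i-- {
		if signed[i] == '|' {
			return sign(signed[:i]) == signed
		}
	}
	return false
}

func main() {
	c := sign("user=42")
	fmt.Println(verify(c))                  // true
	fmt.Println(verify("user=43|deadbeef")) // false: tag doesn't match
}
```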

Gorilla/RPC

Go actually comes with a pretty solid offering around remote-procedure calls, and so the gorilla/rpc package does not immediately appear to be necessary. However, this package reimplements RPC as an HTTP round-trip and thus is able to run in contexts where straight TCP connections aren’t a viable option. Additionally, it comes with some enhancements that make RPC servers more flexible and able to serve more clients.

As part of the RPC implementation, Gorilla provides two protocols – JSON-RPC and protorpc – to send and receive RPC requests. This, plus a pluggable model that enables additional protocols to be created, allows a great deal of flexibility when creating RPC servers.

Gorilla/Sessions

Sessions are perhaps one of the oldest aspects of web applications that Go doesn’t support out of the box. This ability to store information related to a single HTTP client across multiple connections is often used to enforce security requirements and implement e-commerce shopping carts. The gorilla/sessions package sits on top of the gorilla/context and gorilla/securecookie packages to provide a straightforward and easy-to-use session-management system for Go.

Gorilla/WebSocket

While sessions are a classic tool that is not included in Go, the Gorilla Toolkit also attempts to implement some of the latest features of web applications, such as WebSockets. These special connections basically take an HTTP request and convert it into a persistent TCP connection with a thin wrapper over top of it. This persistence allows the server to send messages back down to the client without being prompted beforehand. This approach allows the server to be much more active in the management of the clients and allows powerful new capabilities such as chat clients and real-time logging.

Summary

All in all, Go, when enhanced by the Gorilla Toolkit, becomes a powerful web application development platform that can compete with any other web-development language out there.

If you are interested in learning more about Go and the Gorilla Toolkit, check out Pluralsight, where there are now over 20 hours of content about the Go programming language. Specifically, you can watch my new course on the Gorilla Toolkit to learn more about how to integrate Gorilla into your application.

Creating Web Applications with Google’s Go


Since I first heard about it, Google’s Go language (http://golang.org) has fascinated me. I love the idea of a fully compiled language as a way to get maximum performance. Add in the powerful yet elegant concurrent programming model and you have a strong language. The language designers didn’t stop there, however. They went on to actually make the language…simple. Writing Go feels a lot like writing JavaScript (my first love), with functions as first-class citizens, managed memory, and a quirky object model to boot. However, it is strongly typed and supports massive concurrency via green threads and message passing (communicating sequential processes), reminiscent of the actor model that Erlang uses to such great effect. Additionally, a powerful yet compact standard library makes it easy to get started building production-quality applications.

Learn by Teaching

Since Go is not a blessed language with my current employer, I am always looking for excuses to work with it. Luckily, I was given the opportunity to create a course for Pluralsight that gave me the chance to show what I’ve learned and deepen that knowledge throughout the course-creation process.

The Course

In order to solidify and strengthen my understanding of the language, I decided to focus on developing the course by only using the core libraries. While I wasn’t able to do this completely (parameterized routing is not directly supported), I was surprised at how close I could get. Here is a breakdown of the course and what I cover:

Creating a Resource Server

In this module, the basic components of building a web application are explored. I start with a simple HTTP listener and grow the demo code out to serve as a full resource server, providing static files from the server.

HTML Templates

Go has a very powerful and flexible template system. So much so that a full exploration of this topic could justify a course by itself (and might someday…). In this module, I explore the basic elements of the template system that allow for simple, yet powerful templates to be used to create the view layer of the application. Among the topics covered are: data binding, branching and looping, and template composition (using templates from within other templates).
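As a taste of those topics, here is a small sketch (with made-up data) combining data binding, looping, and branching in a single template:

```go
package main

import (
	"bytes"
	"fmt"
	"html/template"
)

// page demonstrates data binding ({{.Title}}), looping ({{range}}),
// and branching ({{if}}) in one small template.
const page = `{{.Title}}
{{range .Items}}- {{.}}
{{end}}{{if .OnSale}}Everything is on sale!{{end}}`

type viewData struct {
	Title  string
	Items  []string
	OnSale bool
}

// render parses the template and executes it against the given data.
func render(data viewData) string {
	t := template.Must(template.New("page").Parse(page))
	var buf bytes.Buffer
	if err := t.Execute(&buf, data); err != nil {
		panic(err)
	}
	return buf.String()
}

func main() {
	fmt.Print(render(viewData{
		Title:  "Lemonade Stand Supplies",
		Items:  []string{"Cups", "Lemons"},
		OnSale: true,
	}))
}
```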

Building a Model, View, Controller (MVC) Style Application

Go’s basic library comes as a toolkit, meaning that it exposes a collection of components but doesn’t prescribe how they should be composed together. While there are many third party web-frameworks for Go, I chose to walk through the process of creating an MVC framework from scratch using, whenever possible, only core tools. As a software architect by day, I found this process extremely enjoyable and remarkably easy due to the high quality tools that come in the toolkit.

The View Layer

This module takes the lessons learned with templates and applies them to a demonstration project that was designed to simulate a real world problem (a basic e-commerce site for supplying lemonade stands). At the beginning of the module, all of the pages are served as static HTML. By the end, the pages are fully data driven, using templates and a primitive router to allow the server to dynamically manipulate the content that is served.

The Controller Layer

When creating the view layer, a primitive and poorly factored router is included in the application’s main function to handle serving different pages. In this module, a dedicated controller layer is built, complete with a front controller that serves as the router and a collection of back controllers that serve the requests.

This module is broken up into two parts. At first the controllers just serve GET requests, including parameterized routes. Since parameterized routes are not supported out of the box, the awesome “mux” package from the Gorilla Toolkit was called upon to fill the gap. The second part of the controller discussion deals with handling data submitted by the user, including posted HTML forms and POST requests sent using Ajax.

The Model Layer

Normally, the model layer of an application holds the specific business rules of the project. As a result, this layer typically ends up being “normal” code for the language, without any specific elements that indicate that it is part of a web application. I take advantage of this pause in the action to introduce an element that I feel is missing in many “build an app” style courses like this – testing. While I don’t go into extensive detail about how to use Go’s testing framework, the system is pretty straightforward and it doesn’t take long to create the foundation for a strong unit-testing suite.

Persisting Data

The final element of a web application is loading and saving data from a persistence layer, typically a database of some sort. I cover this by creating a basic system for logging a user in with an encrypted password and using that information to create a database-tracked session object so the user can avoid logging in each time they visit.
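The password-handling portion can be sketched as follows. This is my simplified illustration using the standard library’s sha256; a production system would favor an adaptive hash such as bcrypt:

```go
package main

import (
	"crypto/rand"
	"crypto/sha256"
	"crypto/subtle"
	"encoding/hex"
	"fmt"
)

// hashPassword salts and hashes a password, storing the salt alongside
// the digest as "hex(salt)$hex(digest)".
func hashPassword(password string, salt []byte) string {
	sum := sha256.Sum256(append(salt, password...))
	return hex.EncodeToString(salt) + "$" + hex.EncodeToString(sum[:])
}

// newSalt returns 16 random bytes so identical passwords hash differently.
func newSalt() []byte {
	salt := make([]byte, 16)
	rand.Read(salt)
	return salt
}

// checkPassword re-hashes the candidate with the stored salt and compares
// in constant time.
func checkPassword(stored, candidate string) bool {
	var salt []byte
	for i := range stored {
		if stored[i] == '$' {
			salt, _ = hex.DecodeString(stored[:i])
			break
		}
	}
	rehashed := hashPassword(candidate, salt)
	return subtle.ConstantTimeCompare([]byte(stored), []byte(rehashed)) == 1
}

func main() {
	stored := hashPassword("opensesame", newSalt())
	fmt.Println(checkPassword(stored, "opensesame")) // true
	fmt.Println(checkPassword(stored, "guess"))      // false
}
```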

Summary

Overall, I was impressed with how straightforward and easy it was to create a full web application with Go. As you might expect, it is challenging to have demonstrations that are sophisticated enough to be realistic but simple enough to be easily explained. Go’s simple programming model and focused library allowed me to use a pretty high-level demonstration project, hopefully giving a good context for how you might use Go in your own projects.

If you are interested, check out the course here: http://www.pluralsight.com/courses/creating-web-applications-go and let me know what you think.

My Design Process – Sequence Diagrams

I am currently doing a series of posts where I am documenting my personal software design process. This process has been developed to support the environment that my designs are currently deployed into – a fairly large retail company that needs to control the rate that applications are deployed to the stores in order to limit the number of distractions that threaten to pull the sales staff from their primary role of, well, selling.

I make no claim that this should be viewed as the “one-right way” to design applications. I imagine that I would have a radically different process if I were building publicly facing websites for a conference. My goal is to document my process for solving the problems that I am facing.

Stages

Currently, my design process flows through the following stages:

Component Design

This stage involves the specification of the high-level components that make up the proposed software system and describes, very generally, how they will interact.

Requirements Gathering

In the requirements gathering stage, all available documentation from the business owners is gathered and recorded. This often includes conversations with program management to ensure that each requirement is understood.

Use-Case Design

The requirements are then distilled down into individual use-cases that the software system will implement in order to meet the requirements. Also, the components that are likely involved in the use-case are identified.

Activity Diagrams and Wireframing

This stage involves the creation of activity diagrams (aka flow charts) to show how the user and system components will actually interact to implement each use-case. Also, since the interaction of the user and the system are starting to be specified, the structure of the user-interface (aka wireframes) are created at this time as well.

Sequence Diagrams

The final, and most time intensive stage, is the creation of sequence diagrams. These diagrams contain the detailed information about how the system operates to implement the process illustrated by the activity diagrams.

Sequence Diagrams

A sequence diagram is a tool that identifies the specific, detailed process that an application uses to implement a required set of functionality. These diagrams are created for every swimlane on each activity diagram that we created in the previous step, so quite a few diagrams are typically required. In the hierarchy of design, this is the lowest level, lying below activity diagrams and just above the code itself. As a result, it starts to look a lot like a program. However, its role is to establish the messages (or method calls) that are passed between the various objects in the application. It stops short of describing how the messages are processed. When completed, the entire structure of the application becomes known.

As you might imagine, this combination of the number of diagrams and the detail of those diagrams can lead to an extremely large time investment and, if possible, should be avoided. However, there are many times when this level of detail is desired, especially when the implementation of an application is going to be performed by a team of unknown skill level. Additionally, if the team is highly distributed, then it can be an advantage to have a single team member perform this analysis so that the completed application is more cohesive. One final benefit can be realized if there are doubts about the best way to implement the application. By taking this detailed step, the understanding of how the system is going to function grows considerably. This can lead to a large number of refactorings that increase the organization and clarity of the application’s structure. Since the only things that have to change are the sequence diagrams, the refactoring can be done very quickly, much like reworking a wireframe to meet a customer’s expectation instead of waiting until the application’s user interface is fully coded and then trying to make changes.

So, we’ve talked a lot about what these diagrams are and why they can be useful, so let’s see an example of one and then analyze it. Below is a sequence diagram of how a web application might handle the login process for a user:

[Figure: sequence diagram of the login process for a web application]

We start the diagram in the upper left corner. Notice the filled circle with the arrow leading into the login controller. This is what is called a “found message”, meaning that the diagram doesn’t know where it came from, but it does know how to handle it. In this example, the found message is a POST request to the URL /login. In my designs I use this type of signal to document the URL that the server will be listening on to trigger a certain sequence. The rest of the diagram then documents how the application responds to the request. The found message kicks off a “recursive message” (the object calls itself). Normally, I use this on controllers to document the controller method that will actually handle the request. This method’s signature documents the data that is expected to be transmitted to the controller, in this case the username and password.

The next step in the process is for the controller to call into the model layer and try to retrieve the user via a call to the user model’s “getUser” method. The model layer is normally responsible for managing the business logic for the application. In our example, this involves two messages. First, a call is made to hash the password. We don’t document the method to do that (although we could). We do specify that the hashing is to be done by another method within the UserModel itself though. It returns the hashed password to the calling method, presumably for use later. That “later” happens right away. The model calls the user dao (aka data access object) to retrieve the user from the database. The username is passed through, but the password is implied to be the hashed password from the previous step. If necessary, this can be documented by adding an annotation to the diagram, but I normally try to avoid this, if possible.

The user dao has the responsibility of taking a username and password and returning a User object. This is to be done by interrogating the database and attempting to return the requested instance. The result of this call is sent back to the model object which returns it to the controller.

The controller now has a decision to make. If a user object is returned, then we forward the user on to the home page of the application. If an object wasn’t found (i.e. we received a “null” response), then we send them to an error page where they will probably be given the opportunity to try to log in again.

This example is certainly incomplete, but it illustrates many of the important points of a sequence diagram. One critical point that I haven’t mentioned is what this diagram does not do. Notice that this diagram doesn’t take responsibility for where the request comes from or exactly how the response is handled. Those details are the responsibility of other parts of the application.
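Translated into code, the flow in the diagram might look like this; every name here is illustrative, matching the diagram rather than any real framework, and the hash and “database” are stand-ins:

```go
package main

import "fmt"

type User struct{ Username string }

// UserDao interrogates the "database" (a map here) for a matching user,
// returning nil when no user is found, as in the diagram.
type UserDao struct{ records map[string]string } // username -> hashed password

func (d UserDao) getUser(username, hashedPassword string) *User {
	if d.records[username] == hashedPassword {
		return &User{Username: username}
	}
	return nil
}

// UserModel owns the business logic: hash first, then delegate to the dao.
type UserModel struct{ dao UserDao }

func (m UserModel) hashPassword(p string) string { return "hashed:" + p } // stand-in hash

func (m UserModel) getUser(username, password string) *User {
	return m.dao.getUser(username, m.hashPassword(password))
}

// handleLogin plays the controller: it decides where to send the user
// based on whether the model returned a User or nil.
func handleLogin(m UserModel, username, password string) string {
	if m.getUser(username, password) != nil {
		return "/home"
	}
	return "/error"
}

func main() {
	model := UserModel{dao: UserDao{records: map[string]string{"ada": "hashed:secret"}}}
	fmt.Println(handleLogin(model, "ada", "secret")) // /home
	fmt.Println(handleLogin(model, "ada", "wrong"))  // /error
}
```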

The final thing that I should mention is that, as powerful as sequence diagrams are, they are not the only set of artifacts that are generated in this stage. Sequence diagrams are great for the objects in the application that are function providers, such as controllers, models, and daos. However, we don’t have a good way to show the contents of the User object that is retrieved by the dao in our example. This is an important element of the design and needs to be described as well. In order to finish our example, let’s include that as well.

Class Diagrams

[Figure: class diagram showing the User and Role classes]

A class diagram is a tool that describes the internal structure of classes and how classes relate to each other. While there are a lot of things that we can describe with these diagrams, this simple example shows most of the elements that I normally use. Notice the User class denoted by the rectangle on the left-hand side. Below the name of the class is a list of the data fields contained within the class. The minus sign indicates that they are private to the class. These signs are followed by the name of the field and then the data type that the field will hold. The next section shows the methods on the class. They are described using syntax similar to that used for the fields. Within the parentheses, we describe the arguments that the method will accept and the types of those arguments. After the parentheses, we list the type of data returned by the method, if any is present.

The final thing that I want to call out is the linkage between the “Role” class and the “User” class. This line indicates some type of relationship exists between the two objects. Specifically, in this example, the diamond symbol indicates that the User class contains a list of Role objects.
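Translated into Go, the diagram might read as follows; beyond User, Role, and the containment relationship, the field and method names are my own illustration:

```go
package main

import "fmt"

// Role is the contained class from the diagram.
type Role struct {
	name string
}

// User mirrors the diagram: unexported (lower-case) fields correspond to
// the "-" (private) markers, and the diamond between User and Role
// becomes a slice field.
type User struct {
	username string
	roles    []Role
}

// hasRole is an example method matching the diagram's method section.
func (u User) hasRole(name string) bool {
	for _, r := range u.roles {
		if r.name == name {
			return true
		}
	}
	return false
}

func main() {
	u := User{username: "ada", roles: []Role{{name: "admin"}}}
	fmt.Println(u.hasRole("admin")) // true
	fmt.Println(u.hasRole("guest")) // false
}
```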


Summary

As you might imagine, building out this level of detail throughout an entire application takes a considerable amount of planning and effort. However, it pales in comparison to the effort required to write the application itself. This fact, combined with the relative ease with which the design can be adjusted and refactored, makes it a valuable step in the process when the designer needs to be very clear about how the application is to be constructed. However, there are many cases when this level of detail isn’t necessary. If the development team has a strong standard for how applications are to be designed, or they are skilled in modern best practices, then the cost of this stage might not be justified.

I hope that the last few weeks have proven helpful. Maybe they have provided insight into a process that you can find application for. They may have also helped you understand why this process is not the correct one for your situation. Either way, I appreciate you taking the time to read this series. Till next time, happy coding!

My Design Process – Activity Diagrams and Wireframing

I am currently doing a series of posts where I am documenting my personal software design process. This process has been developed to support the environment that my designs are currently deployed into – a fairly large retail company that needs to control the rate that applications are deployed to the stores in order to limit the number of distractions that threaten to pull the sales staff from their primary role of, well, selling.

I make no claim that this should be viewed as the “one-right way” to design applications. I imagine that I would have a radically different process if I were building publicly facing websites for a conference. My goal is to document my process for solving the problems that I am facing.

Stages

Currently, my design process flows through the following stages:

Component Design

This stage involves the specification of the high-level components that make up the proposed software system and describes, very generally, how they will interact.

Requirements Gathering

In the requirements gathering stage, all available documentation from the business owners is gathered and recorded. This often includes conversations with program management to ensure that each requirement is understood.

Use-Case Design

The requirements are then distilled down into individual use-cases that the software system will implement in order to meet the requirements. Also, the components that are likely involved in the use-case are identified.

Activity Diagrams and Wireframing

This stage involves the creation of activity diagrams (aka flow charts) to show how the user and system components will actually interact to implement each use-case. Also, since the interaction of the user and the system is starting to be specified, the structure of the user-interface (aka wireframes) is created at this time as well.

Sequence Diagrams

The final, and most time-intensive, stage is the creation of sequence diagrams. These diagrams contain the detailed information about how the system operates to implement the process illustrated by the activity diagrams.

Wireframing

Wireframes are described on Wikipedia as a visual guide that describes the skeletal structure of a user interface. In other words, a wireframe describes what will be on each screen of the application. This content normally includes text fields and tables of information for the user, buttons that allow the user to take actions, and navigation controls that allow the user to move through the application to complete their work. When I started building this process, I followed the classic approach of using the wireframe to describe the structure of a page without giving any guidance on the styling (colors, fonts, etc.). This leads to artifacts that look something like this:

030615_1623_MyDesignPro1

This wireframe describes the home page for a demonstration project that I am using in my current Pluralsight course, but it serves as a good example of a classic wireframe. You can see that we have some sort of image in the upper left corner (denoted by the box with the x through it). This is, presumably, a company logo. In the upper right, we see a region that holds links for us to navigate through the site. The main region of the page is taken by some text (the gray bars), followed by the three primary categories that we are driving the user to. At the bottom of the page, we have a small section to contain social media links.

Wireframes like this are intended to facilitate discussions about how an application should be structured without introducing styling cues that can distract from the important discussion about how the application will function. However, I have found that the users that I often interact with are not able to separate these topics so easily. As a result, the lack of styling becomes a distraction. In effect, it backfires. So I’ve started creating wireframes that look like this:

030615_1623_MyDesignPro2

As you can see, I have added colors, images, and typography to describe the visual design of the application as well as the structure. This is often done by partnering with a visual designer to deliver the best visual experience possible. While this is more time-intensive than a simple gray-scale image, it is still much faster to iterate on these images than is possible in the final application code.

So why, you may ask, is this the right time to create wireframes? Well, it turns out that the other part of this step, the creation of activity diagrams, describes how the user interacts with the system to implement each use-case from the previous step. Since the user often requires a visual interface to facilitate this interaction, it makes sense to capture how the application exposes each function (in the wireframes) at the same time as how the system will respond to that interaction (in the activity diagrams).

Activity Diagrams

In the last post, I discussed how I create use-case diagrams in order to identify the ways in which a user will interact with the system that is being designed. This allowed us to identify the highest-level functions that will be a part of the system in order to meet the requirements that were gathered in the second phase. It also identified the components of the system that will probably be involved in implementing each function.

Activity diagrams are used to expand the use case with the intention of describing how the user and components will actually interact. These “activities” specify what the component does including any communication that it will have with the other systems. To illustrate this, let’s walk through how a user might log into an application. For this discussion, let’s adopt the component design from our first stage that looks like this:

3

This system involves a web browser that will display the user interface and send user actions to the server for processing. The server consists of a controller layer, which will receive the browser’s request for action, a model layer which will implement the business logic and interact with the data store, and a view layer which will prepare a response for the browser to display the effect of their action. Finally, a data store is present to provide long-term storage for the application.

When considering this system, and the user’s desire to login, we end up with a use-case diagram like this:

030615_1623_MyDesignPro4

As you can see, we are expecting that every component in our system will have a role to play in the login process, but we haven’t specified any detail yet. The Activity Diagram will supply that.

Step 1 – Swim lanes

The first step in creating the activity diagram is to make sure that the diagram stays organized. Our goal for this is to describe the specific functions that each component must have in order to implement the use case. Swimlanes organize the activities in a way that allows the functions for each component to be easily identified. We are going to need one lane per component plus the user, so our initial diagram looks like this:

030615_1623_MyDesignPro5

Step 2 – Adding Activities

The next step is to start to describe how the components will interact.

030615_1623_MyDesignPro6

We show here that the user starts the process by clicking on the login link. The browser then forwards the request to the controller, which simply forwards it to the view layer. The view layer sends the page back to the browser, which renders it. Notice that the lines of communication follow the component diagram and each activity clearly describes what the component needs to do at each step. You’ll also notice that the activities are not overly technical. The intention is to keep ourselves in the mindset of mapping the process, not the actual programming constructs involved in implementing it.

So far, we have the login page shown to the user. Let’s continue to flesh this out by handling the user submitting their credentials.

030615_1623_MyDesignPro7

At this point, we have engaged every component in the system to look up the user’s credentials, so it looks like our use-case diagram was correct. This isn’t always the case, and that’s fine. The goal of my layered approach is to allow me to focus on one part of the problem at a time. As I move through the layers, I always discover bad assumptions that I made previously. That is actually one of the key benefits that I reap from this technique – by constantly going over the design, but digging a bit deeper every time, my overall understanding of the solution grows.

Step 3 – Branching Flows

We do, however, have a bit of a problem: what happens if the user enters the wrong credentials? So far our diagram has had a nice, linear flow. Our next step will have to handle the case where the user logs in successfully, and one where they fail to login. Our next diagram will show that.

030615_1623_MyDesignPro8

Finally, we have the full flow of the activity defined. Notice the diamond symbol in the server-controller’s swim lane. That symbol indicates that the activity will take different paths based on some condition. In this case, if the user record was found, we direct to the logged-in view; otherwise, we request the login-failure view. This decision resides in the controller since the controller is responsible for directing the response based on what the model gives it. After that decision is made, the controller requests the correct view from the view layer. Then we have the last new symbol: the thick vertical bar at the bottom of the view swim lane indicates that multiple paths are joining back together. This is done because the browser will take the same action of showing the view regardless of which view it receives.
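The controller’s decision diamond translates naturally into a simple branch in code. Here is a minimal Go sketch of that flow; the function names, credentials, and view contents are hypothetical placeholders, not the actual application code:

```go
package main

import "fmt"

// findUser stands in for the model layer's credential lookup;
// a real implementation would query the data store.
func findUser(username, password string) bool {
	return username == "alice" && password == "secret"
}

// renderView stands in for the view layer, which prepares the
// HTML response for the browser to display.
func renderView(name string) string {
	return "<html><!-- " + name + " view --></html>"
}

// login plays the controller's role: it mirrors the decision
// diamond by directing to the logged-in view on success and
// the login-failure view otherwise.
func login(username, password string) string {
	if findUser(username, password) {
		return renderView("logged-in")
	}
	return renderView("login-failure")
}

func main() {
	fmt.Println(login("alice", "secret"))
	fmt.Println(login("alice", "wrong"))
}
```

Either way, the browser receives a view to render, which is exactly what the join bar in the diagram expresses.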

Summary

Until this point, the design process has been working to define the boundaries of the system (via requirements gathering and component diagram creation) and how they interact at a very high level (via use case diagrams). With that landscape defined, wireframes and activity diagrams are used to determine what the application’s interfaces will look like and how the components of the system will work together to provide the required functionality. By organizing the diagrams into swim lanes, a clear picture of each component’s functions can be seen.

Depending on who will be building the software, this may be the last step of the process. However, there are many times when we want even more clarity on how the application will be built in order to comply with internal development standards. In those situations, the last, and most time-intensive part of the process will be started – the creation of sequence diagrams. These diagrams are created for every component in each of the activity diagrams. Each of these diagrams illustrates the actual software objects (e.g. classes and objects) that are being called along with their message signatures. In short, this next level is as close as we can get to defining the software system without actually writing code.

My Design Process – Use-Case Diagrams

I am currently doing a series of posts where I am documenting my personal software design process. This process has been developed to support the environment that my designs are currently deployed into – a fairly large retail company that needs to control the rate at which applications are deployed to the stores in order to limit the number of distractions that threaten to pull the sales staff from their primary role of, well, selling.

I make no claim that this should be viewed as the “one-right way” to design applications. I imagine that I would have a radically different process if I were building publicly facing websites for a conference. My goal is to document my process for solving the problems that I am facing.

Stages

Currently, my design process flows through the following stages:

Component Design

This stage involves the specification of the high-level components that make up the proposed software system and describes, very generally, how they will interact.

Requirements Gathering

In the requirements gathering stage, all available documentation from the business owners is gathered and recorded. This often includes conversations with program management to ensure that each requirement is understood.

Use-Case Design

The requirements are then distilled down into individual use-cases that the software system will implement in order to meet the requirements. Also, the components that are likely involved in the use-case are identified.

Activity Diagrams and Wireframing

This stage involves the creation of activity diagrams (aka flow charts) to show how the user and system components will actually interact to implement each use-case. Also, since the interaction of the user and the system is starting to be specified, the structure of the user-interface (aka wireframes) is created at this time as well.

Sequence Diagrams

The final, and most time-intensive, stage is the creation of sequence diagrams. These diagrams contain the detailed information about how the system operates to implement the process illustrated by the activity diagrams.

Use-Case Diagrams

In this post, we’ll focus on one of the simplest, and most vital, parts of the process – the derivation of the use-cases. As the name implies, a use-case is a description of one way that the system will be used. The “users” may be customers, employees, or even other software systems. Ideally, a use-case is described by a simple verb-noun pairing. This allows us to describe what the use-case is while avoiding the temptation to start talking about how that use-case will be implemented.

To illustrate this process, I would like to leave the world of software systems for a bit. The reason for this is that the concept, and the value that I derive from it, are best illustrated in a distributed system. While I could introduce an example that uses service-oriented architectures (SOA) and enterprise service buses (ESBs), my goal is to make my process as easy to understand as possible, and these abstract concepts would make that more difficult. Instead, let’s tackle something that most people have used, or at least heard of – a car.

A car is a distributed system because it is, in fact, not one machine, but many. We are all familiar with the fact that a car has an engine, a transmission, and brakes. Not surprisingly, there are many other systems that work together to move you safely down the highway at over 100 feet per second.

To start our discussion, we need to honor our first two steps – create a component diagram and gather requirements. In the interest of clarity, we will simplify our car quite a bit and use the following component diagram:

022515_1252_MyDesignPro1

We’ll keep the requirements simple as well; let’s go with this list:

  • Must be able to accelerate from 0 – 60 mph in 8 seconds
  • Must be able to accelerate from 0 – 100 mph in 23 seconds
  • Must be able to stop from 60 mph in 150 feet on dry asphalt pavement
  • Turning circle to be a maximum of 38 feet

As you might imagine, a real car is quite a bit more sophisticated and its requirements extend into the thousands, but I think this will be enough to get us going.

In my process, use-case diagrams are the first time in which requirements and the components of the system are joined together. This is a critical step and sets us up for the rest of the design process. However, the diagrams that we generate are not all that sophisticated. As I’ve mentioned before, a use-case is ideally described by a simple verb-noun pair. I use them to link the actors of the system (i.e. the users) with the components of the system that will be involved in implementing the use-case. Additionally, I like to list the requirement(s) that are met by the use case in order to ensure that we don’t lose sight of them as the design continues to evolve. Let’s build that up one step at a time.

All of the requirements involve the driver of the car interacting with the system. So, let’s start with them:

022515_1252_MyDesignPro2

Like it? I thought you would. If not, then keep in mind – this is engineering, not art. The point is that we have identified an “actor” that will interact with the system. Now we need to identify a use-case for this actor. If we look at the first two requirements, it appears that the driver needs to be able to accelerate the car. Let’s add that use case to our diagram.

022515_1252_MyDesignPro3

Okay, so far, so good. We understand one of the things that our car needs to do (accelerate). While this is technically a complete diagram, I add more information in order to help me document what components of the system will work to implement the use case. So, let’s think this through:

  • A car should normally accelerate only in response to the driver’s request, so the driver controls will probably be involved
  • The engine is what powers the car; since we need to add energy to the car to make it go faster, we’ll probably need the engine to provide it
  • The wheels are what actually transmit the power to the road, so we’ll need them
  • The transmission has a lot of jobs, but one of them is to transmit the energy from the engine to the wheels. Since the engine and wheels are involved, the transmission needs to be as well.

As a final part of this step, I like to highlight the interface that the actor will use to interact with the system by linking that to the use case. Putting it all together gives us this diagram:

022515_1252_MyDesignPro4

Now we are getting some useful information. We have a very simple function that the system will provide (accelerate car) and the components of the system that will enable that function to happen.

As a final step, let’s document the requirements that this use case will be responsible for fulfilling. It looks like the first two requirements involve the acceleration of the car. Let’s add them to the diagram now:

022515_1252_MyDesignPro5

With this last addition, we have documented a use-case of the system, the components that will be involved in providing that use-case, and the requirements that will have to be met in order to determine if the use-case was successfully implemented. As you can see, the diagram is relatively simple, but provides the vital function of bringing everything together.

Summary

A use-case diagram’s primary responsibility is to document the actors of the system and the uses that they have for it. We have extended that concept a bit to include the major subsystems that we think will come into play in providing the needed functionality. We have also included the requirements that determine whether the system is functioning properly or not. The next step in the process is the creation of wireframes, which will start to define our user-interface, and activity diagrams, which will start to assign specific functions that each component will need to provide in order to implement the use-case. We’ll see how that happens next time.

New Pluralsight Course – Creating Custom Builds with Dojo

My latest course for Pluralsight has just been released, Creating Custom Builds with Dojo. Check it out here and all of my courses here.

My latest course for Pluralsight has just been released, Creating Custom Builds with Dojo. Check it out here and all of my courses here.

“Building” JavaScript

If, by chance, you aren’t using Grunt or Gulp, you might not be aware of what it means to “build” a JavaScript project. As you probably know, JavaScript is not compiled before being sent to the client. As such, building has a different meaning than it does for, say, an application written in C++. In this context, building means that the assets that make up the application are optimized for delivery via HTTP. Generally, this means two things: the number of requests is minimized and the size of each request is kept as small as possible.

Types of Optimizations

For Dojo, three types of assets are optimized: JavaScript modules, HTML templates (used by custom widgets) and CSS files.

JavaScript Modules

JavaScript modules are optimized by the Dojo build system using two different techniques. First, the files are compressed or minified using one of three compression utilities: ShrinkSafe (by the Dojo Foundation), UglifyJS, or the Closure Compiler from Google. With little configuration, these tools can easily reduce the size of a JavaScript module by 30 – 60%. By taking advantage of the many options that are available in the build system and the tools themselves, even higher levels of compression are achievable.

The second method that the build system uses to optimize code is by bundling modules together into what is known as a “layer”. These are special modules that combine a set of simple modules, along with all of their dependencies, into a single, AMD-compliant file. When properly configured, all of the JavaScript for an application can be loaded in a single HTTP request.
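To make the idea of a layer concrete, a Dojo build profile along these lines could define one. This is a hypothetical sketch; the package and module names (`app`, `app/main`, `app/layer`) are illustrative, not taken from the course:

```javascript
// Hypothetical build profile (e.g. app.profile.js).
var profile = {
    basePath: "..",
    releaseDir: "release",

    // Minify layers with the Closure Compiler; "shrinksafe"
    // and "uglify" are the other options mentioned above.
    layerOptimize: "closure",

    packages: [
        { name: "dojo", location: "dojo" },
        { name: "app",  location: "app" }
    ],

    layers: {
        // One AMD-compliant file containing app/main and all of
        // its dependencies, loadable in a single HTTP request.
        "app/layer": {
            include: [ "app/main" ],
            boot: true
        }
    }
};
```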

HTML Templates

Many Dojo applications take advantage of the powerful widget framework included in Dojo called Dijit. These custom widgets often involve the creation of HTML templates that form the structure of the widget. In a non-trivial application, this can lead to dozens or hundreds of HTTP calls being made. The build system handles this by detecting when a widget that is part of a layer includes a template. These templates are then built into the layer, just like all of the other dependencies, so that no additional requests are required to load them.

CSS Files

If you have been working with client-side code for any length of time, then you probably have a structure similar to this in your project:

022215_2151_NewPluralsi1

This keeps all of the files nicely organized and works for many applications. However, when using Dojo and its build system, the best approach is to do this:

022215_2151_NewPluralsi2

Notice that the CSS has been moved into the app folder. Assuming that the app folder is a Dojo package, the build system is able to detect these files. When it discovers a CSS file, it will take two actions. First, as you might expect, the CSS is compressed and any unnecessary white space is eliminated. Additionally, the build system will look for any @import directives and, when discovered, will inline the imported CSS directly into the requesting file. This combined bundling and minification ensures that the styling of the application is delivered in the most efficient way possible.

What is Covered?

While Dojo’s build system is well documented on DojoToolkit.org, it can be a bit intimidating to navigate the huge number of options that are available to tune a build. My Pluralsight course breaks the build system down and deals with its components one at a time. Each concept of each step is introduced individually, normally accompanied by a demo that shows the impact of that specific setting. Over the two hours of the course, this builds a strong understanding of each component of the system and how they integrate together. The final module even covers quite a few options that are useful in specific scenarios, but might not apply to every project.

Feel free to check the course out and let me know what you think!

My Design Process – Requirements Gathering

I am currently doing a series of posts where I am documenting my personal software design process. This process has been developed to support the environment that my designs are currently deployed into – a fairly large retail company that needs to control the rate at which applications are deployed to the stores in order to limit the number of distractions that threaten to pull the sales staff from their primary role of, well, selling.

I make no claim that this should be viewed as the “one-right way” to design applications. I imagine that I would have a radically different process if I were building publicly facing websites for a conference. My goal is to document my process for solving the problems that I am facing.

My first post gave an overview of the process, but I’ll repeat some of it here for convenience.

Stages

Currently, my design process flows through the following stages:

Component Design

This stage involves the specification of the high-level components that make up the proposed software system and describes, very generally, how they will interact.

Requirements Gathering

In the requirements gathering stage, all available documentation from the business owners is gathered and recorded. This often includes conversations with program management to ensure that each requirement is understood.

Use-Case Design

The requirements are then distilled down into individual use-cases that the software system will implement in order to meet the requirements. Also, the components that are likely involved in the use-case are identified.

Activity Diagrams and Wireframing

This stage involves the creation of activity diagrams (aka flow charts) to show how the user and system components will actually interact to implement each use-case. Also, since the interaction of the user and the system is starting to be specified, the structure of the user-interface (aka wireframes) is created at this time as well.

Sequence Diagrams

The final, and most time-intensive, stage is the creation of sequence diagrams. These diagrams contain the detailed information about how the system operates to implement the process illustrated by the activity diagrams.

Requirements Gathering

In this post, I would like to walk through the process that I use to gather and distill the application’s requirements down to a manageable subset. These will then define the functionality that the rest of the design should enable.

In the last post, we developed a component design that looks like this:

3

To summarize that post, we have three high level components:

  • A web browser that will receive server-generated HTML and send data back using standard form posts
  • A data store that is responsible for enabling long-term storage and retrieval of data
  • A web server that implements the MVC design pattern
    o The controller receives requests from the browser and forwards them to the model for processing. It then takes the response from the model and forwards that to the view.
    o The model implements the business logic and interacts with the data store.
    o The view receives the processed data from the controller and transforms it into HTML for presentation by the client.
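The three server-side layers can be sketched as a pair of interfaces wired together by a controller. This is a minimal Go illustration with hypothetical type and method names, not the actual application design:

```go
package main

import "fmt"

// Model implements the business logic and interacts with the
// data store.
type Model interface {
	Process(request string) string
}

// View transforms processed data into HTML for presentation
// by the client.
type View interface {
	Render(result string) string
}

// Controller receives a request, forwards it to the model,
// then hands the model's response to the view.
type Controller struct {
	model Model
	view  View
}

func (c Controller) Handle(request string) string {
	return c.view.Render(c.model.Process(request))
}

// Toy implementations to show the flow end to end.
type echoModel struct{}

func (echoModel) Process(req string) string { return "processed:" + req }

type htmlView struct{}

func (htmlView) Render(res string) string { return "<p>" + res + "</p>" }

func main() {
	c := Controller{model: echoModel{}, view: htmlView{}}
	fmt.Println(c.Handle("login")) // <p>processed:login</p>
}
```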

We developed this based on some theoretical conversations that were actually the precursors to the requirements. I’ll repeat them here, since they will form our starting point. We want users to be able to:

  • create an account
  • log on and log off of the application
  • create a friend request
  • accept or reject a friend request
  • post an update to their friends
  • see the posts from their friends

These initial requirements were needed to allow us to scope out the major components, but they are normally gathered several weeks or months before the formal design process starts, since the business is typically still feeling its way forward at that point.

Documentation Review

To start the requirements gathering, the first step that I take is to gather as much documentation as I can. This may be in the form of a scoping document that is developed between the program manager and the business, API (application program interface) documentation for any other services that the application must interact with, or email conversations and meeting notes.

Any material that I can get is stored in a project drive, but I work from hard-copy printouts. I will typically read each document (at least) twice.

The first time is intended to give me an idea about the general goals of the document and the perspective that the author is coming from, in case there is an obvious bias.

The second read through is why the hardcopy is critical. This read-through requires a highlighter and is a detailed search for anything that the document is putting forward as either a requirement of the application or an assumption about how the system should work. During this phase, I have a couple of guidelines that I try to adhere to:

Don’t evaluate the requirement against the solution

I have found that it is easy to try to think about how the system will handle each requirement. This is dangerous since it can lead to premature revisions of the design. This task is important, but is handled later in the process.

Avoid looking for mutually exclusive requirements

This is another tricky goal since it seems like such a good idea. The problem with this is that a lot of requirements that might be considered mutually exclusive at first can be reconciled later.

Requirements Analysis

After gathering all of the requirements that I can find, it is now time to analyze the results. In this phase, the requirements that have been gathered are categorized and some initial judgments are made about each requirement. More specifically, every requirement is tagged with the following information:

Source

This is vital information when discussing requirements and the context for a requirement is needed. It can go a long way toward understanding why something was asked for, especially when some requirements have to be modified to manage the scope of the project.

Kind

The “kind” of requirement helps to group requirements by what aspect of the system they are constraining. For example, the need to “create a friend request” is a functional requirement since it is a function that the application must support internally. On the other hand, “retrieve latest tweets from user’s Twitter feed” would be an interface requirement since our system must interact with another system (Twitter in this case).

While not a critical step in the process, I like to do this in order to group the requirements mentally and get an idea about where the system needs to be the most sophisticated. An application that has a lot of functional requirements is going to need a strong internal structure to support the number of functions it must provide. In contrast, if the requirements are heavily slanted toward interfaces, then the design should have a flexible and easily discovered interface to import and export data from other systems.

Verification Method

Software can’t be any better than the degree to which it meets its requirements. The first part of ensuring that the requirements are met is to identify them (this phase); however, this isn’t enough. Somehow, the software needs to be verified to meet those requirements. By identifying a verification method at this point, I, as the designer, communicate my assumptions about who will take primary responsibility for proving that a function exists. There are four methods that I generally use:

Analysis

This method means that the requirement will be met systematically by the design. An example of a requirement that would be solved in this way is: “The system will store its data in xyz database”. A requirement like this is fundamental and normally reflects the standard process for building software in the organization. There is no need to have a downstream team verify it; it will be addressed by the design of the system itself.

Test

I use the “test” method to indicate that I expect the build team to verify this functionality through their automated test suite. An example of this kind of requirement is: “All posts are stored with a timestamp of when they were created”. This requirement doesn’t imply any user experience and cannot be guaranteed by the system design, so the developers will have to prove that it happens.

Inspection and Demonstration

These two methods are similar in that the quality assurance (QA) team handles them, although they are subtly different.

The inspection method means that a QA team member will execute some sort of test script and then check the results. An example of a requirement that could be verified in this way is: “When the user enters invalid credentials, then the attempt is written to the application log”.

The demonstration method is, in my opinion, the weakest method for verifying a requirement. It is typically used for requirements that are transient and difficult to capture. An example of this type of requirement is: “The application’s user interface uses the company standard color scheme.”

Risk

Perhaps the most important tag for a requirement is the risk. Identifying the risk allows the rest of the team to know where my concerns about the project lie. This often drives follow-up conversations with the program management team about what the high-risk items are and how they might be remediated. In short, it is a way for me to say that I believe that the requirement is valid, but I’m not sure that I will be able to design a system that complies with it.

Reconciliation

Once all of the requirements have been gathered and analyzed, they are discussed with program management to ensure that they align with their goals. Additionally, any decisions that are required for conflicting or high-risk requirements are either made, or assigned to someone to address. After these decisions are made, the component design is revised, if required, to accommodate the requirements.

Summary

The goal of the requirements gathering stage is to capture all of the goals and wishes that the people involved with an application have. These requirements are then analyzed to determine how easy they will be to implement and, once implemented, how they will be verified. Finally, the requirements are reviewed with the project’s leadership to ensure that all issues are addressed so that the design can proceed smoothly.

With the requirements settled upon and reconciled with the component diagram, we are ready to establish the use-cases. This stage will introduce high level functions that the system will provide and which components will interact to provide them.

My Design Process – Component Design

In my last post, I outlined my current process for developing a software application. I don’t believe that this is the best strategy for all circumstances, but it is the best that I have found for me personally in the software environment in which I work.


I am designing software for a large retail organization for consumption by field personnel. This introduces interesting challenges that, in my opinion, make many modern software practices impractical. The fact that every change needs to be coordinated with training and documentation updates makes it advantageous to throttle the release cycle into a few fairly large deployments throughout the year.

Stages

Currently, my design process flows through the following stages:

Component Design

This stage involves the specification of the high-level components that make up the proposed software system and describes, very generally, how they will interact.

Requirements Gathering

In the requirements gathering stage, all available documentation from the business owners is gathered and recorded. This often includes conversations with program management to ensure that each requirement is understood.

Use-Case Design

The requirements are then distilled down into individual use-cases that the software system will implement in order to meet the requirements. Also, the components that are likely involved in the use-case are identified.

Activity Diagrams and Wireframing

This stage involves the creation of activity diagrams (aka flow charts) to show how the user and system components will actually interact to implement each use-case. Also, since the interactions between the user and the system are starting to be specified, the structure of the user interface (aka wireframes) is created at this time as well.

Sequence Diagrams

The final, and most time-intensive, stage is the creation of sequence diagrams. These diagrams contain the detailed information about how the system operates to implement the process illustrated by the activity diagrams.

Component Design

In this post, I’d like to dive a bit deeper into the stage where I create the component design. As I mentioned above, this is the part of the design process where I start to identify the high-level components in the system and how they interact. To illustrate this process, let’s design a hypothetical software system. In the spirit of “go big or go home”, let’s go ahead and design Facebook. How hard could it be?

Okay, we aren’t going to design all of Facebook, but let’s take a piece of it. For our study, let’s consider only the parts of Facebook that allow users to do the following things:

–       create an account

–       log on and log off of the application

–       create a friend request

–       accept or reject a friend request

–       post an update to their friends

–       see the posts from their friends
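The feature list above already hints at the core data the system must track: accounts, friend relationships, and posts. A rough sketch of the domain types in Go (all names here are my own invention for illustration, not a committed design):

```go
package main

import "time"

// Account is a registered user of the application.
type Account struct {
	ID      int
	Name    string
	Friends []int // IDs of accepted friends
}

// FriendRequest links two accounts until it is accepted or rejected.
type FriendRequest struct {
	From, To int
	Accepted bool
}

// Post is an update shared with an account's friends.
type Post struct {
	Author    int
	Body      string
	CreatedAt time.Time
}

// FeedFor returns the posts visible to the given account: only posts
// written by its accepted friends, in the order given.
func FeedFor(a Account, posts []Post) []Post {
	friends := map[int]bool{}
	for _, id := range a.Friends {
		friends[id] = true
	}
	var feed []Post
	for _, p := range posts {
		if friends[p.Author] {
			feed = append(feed, p)
		}
	}
	return feed
}
```

Even this crude sketch covers the last two features (posting an update and seeing friends' posts); the account and friend-request features would grow out of the other two types.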


So, we have a general idea of what we want to do. How do we start to create the application?


Let’s start with the basic components that are going to be needed.

1)   The user is going to have to have some way to enter information into the application and get information back out, so we need some sort of user interface.

2)   The application needs to be able to store information about user accounts, their friends lists, and the posts that they make.

3)   We also need to be able to show one person’s posts to their friends.


There are three basic types of application that we can build: client only, server only, or client-server. If we choose a client only application, then we can meet the first two requirements, but sharing would be difficult since we would need to find a way for the client applications to communicate with each other. On the other hand, a server only solution would allow us to meet the second requirement, but not the first or last since we wouldn’t have a user-interface. Therefore, we need some sort of client-server application.


Just like there are several types of applications, there are several sub-types of client-server. I’m going to cut to the chase here. Web-applications are the dominant strategy for solving this problem today, so we’ll go that direction. This does leave open the question about the “client” since that can be a web-browser, mobile device, or client-installed application on a desktop. Since web-browsers allow us to hit the widest number of potential users, let’s go with that.


So far then, we have the following components:

[Diagram 1: the web browser and the web server, connected by a bi-directional line]

Notice the line between the web browser and the web server. This means that they are allowed to know about each other and talk to each other. We’ll see that this kind of bi-directional communication will be pretty rare, and why that is a good idea.


So far, so good, but we don’t have a way to actually store the data: the user accounts, posts, etc. While we could hold it all in the memory of the web server, that isn’t really a good idea since the data would be lost every time we updated or maintained our application. Naturally, the best way to store this data is in some kind of persistent store. Since we don’t have to make a decision on the format yet, let’s just add a general description like this:

[Diagram 2: a Data Store added, with a one-way connection from the web server]

We have added a “Data Store” that the web server can talk to, but that can’t talk back. While many database technologies would, in fact, allow bi-directional communication, that approach tends to cause functionality to mix over time. If the Data Store can’t initiate communication with the web server, then it is easier to isolate from the rest of the system. Generally, this lowers the learning curve of the application.
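One way to enforce that one-way relationship in code is to have the web server depend only on an interface that any concrete data store can satisfy. A minimal sketch in Go (the interface and the in-memory stand-in are illustrative, not a committed design):

```go
package main

// DataStore is the only thing the web server knows about storage.
// Because the store never calls back into the server, it can be
// swapped out or tested in isolation.
type DataStore interface {
	SavePost(author, body string)
	PostsBy(author string) []string
}

// memoryStore is a stand-in implementation; a real system might put
// a database behind the same interface without the server noticing.
type memoryStore struct {
	posts map[string][]string
}

func newMemoryStore() *memoryStore {
	return &memoryStore{posts: map[string][]string{}}
}

func (m *memoryStore) SavePost(author, body string) {
	m.posts[author] = append(m.posts[author], body)
}

func (m *memoryStore) PostsBy(author string) []string {
	return m.posts[author]
}
```

The dependency arrow in the diagram becomes a compile-time fact: the store package would import nothing from the server, while the server holds only a `DataStore` value.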


We have the high-level components that we need, but there is more that we can do in this stage of design. Specifically, how are the sub-applications in the web browser and web server going to work?


A good default solution presents itself for the web server, so let’s start there. By far the most popular way to structure a web server application today is by following the model-view-controller (MVC) design pattern. This pattern is supported by many application frameworks due to its flexibility and its tendency to drive the application to be well-structured. The components of the MVC design pattern are:

–       The controller, which is responsible for receiving a request from the client (e.g. the web browser) and interacting with the model to process that request

–       The model is the part (or layer) that is responsible for actually processing the request and performing any work that is required. This includes returning any data needed to honor a request for information that the client has made.

–       The view takes the result from the model and converts it into a format that the client can use (e.g. an HTML page for a web browser to show to the user).

We could take the time to create a custom design that might be better for our application, but one of the core tenets of design is to use standards whenever possible. This design pattern is adequate and well known by web-application developers, which makes it attractive since the ability to understand and maintain the application will be enhanced by its adoption.
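In code, the three roles might be wired together like this. A minimal sketch in Go, with invented names and no real web framework, just to show who calls whom:

```go
package main

import "fmt"

// model: owns the business logic and the data it operates on.
type model struct{ posts []string }

func (m *model) addPost(body string) { m.posts = append(m.posts, body) }
func (m *model) allPosts() []string  { return m.posts }

// view: converts the model's result into something the client can use,
// here a fragment of HTML.
func renderPosts(posts []string) string {
	out := "<ul>"
	for _, p := range posts {
		out += fmt.Sprintf("<li>%s</li>", p)
	}
	return out + "</ul>"
}

// controller: receives the request, asks the model to do the work,
// then hands the result to the view. It does no real work itself.
func listPostsController(m *model) string {
	return renderPosts(m.allPosts())
}
```

Note how thin the controller is: it knows about both the model and the view, but contains no business or presentation logic of its own, which is exactly the discipline discussed below.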


Incorporating this decision, we now have this design:

[Diagram 3: the web server expanded into its controller, view, and model layers]

Several things have changed by adding this detail to the web server component. We now have a clear parent-child linkage for all of the components, except for the web browser (which still sends to and receives from the web server). Within the web server, the view and model are nicely isolated since they each have only one input and one output. The controller is a bit heavier since it is responsible for interacting with two other layers (the view and the model). This level of awareness is going to make it tempting to do a lot of work here, since the controller knows about the rest of the system. We are going to have to keep this in mind as we move forward. The best way that I’ve found to counter this high level of awareness is to push to do as little work as possible in the controllers. They have enough to do in just coordinating between the web browser, model, and views. Let’s not throw more responsibilities there by adding presentation logic (the job of the view) or business logic (the responsibility of the model). We’ll see how this interaction plays out as the design progresses.


As a final step, we need to think about the web browser. It currently has awareness of the server, and the server is aware of it. In the past, this was managed by ensuring that the browser had a very simple job: it was presented with HTML from the server and then it posted data back to the server in the form of a new location to browse to and, optionally, some data that the user provided (e.g. a username and password when logging in). Our application is currently set up for this kind of design. An alternative is to create what is called a “rich client”, in which there is an application running in the browser as well. If we do that, we get a design like this one:

[Diagram 4: a second MVC application running inside the web browser]

In short, we have added another MVC application into the design. The model of the web browser plugs in to the server. This doesn’t eliminate the fact that we have one component responsible for bi-directional communication; in general, it actually increases the complexity of the application. The advantage of this design is that, since we have an application running in the browser, we can shift some responsibility from the server to the client, simplifying the server. It also gives us the ability to improve the user experience by reducing the number of times that we have to completely reload the page. However, we don’t have a requirement for that now, and it would make it harder to follow how the rest of the design process flows. As a result, let’s stick with the previous version, where we have a rich server and a thin client.


Stay tuned for the next post where I’ll walk through how I capture requirements and what I get from that step.

My Design Process

Over the last couple of years, I have been asked to design several large software systems. While each project has been radically different, I have found myself standardizing on a design process that, at least for me, seems to work regardless of the problem that I am trying to solve. I thought that I would take the opportunity to write up my process for two reasons: first, so that I can check back in on this in a couple of years and see how much I have changed and second, to share with developers that are stepping into a design role and might find my process to be a useful starting point.

Overview

In general, I design an application in a series of stages. The phases are mostly sequential, but there is often a great deal of looping back toward the end of the process as early assumptions are proven to be incorrect or additional requirements are revealed by the design work.

While this process may not be the poster child for “agile” design processes, it does give us the flexibility to employ a wider array of skill levels during the build phase than would be possible if every developer was responsible for designing as they went. Also, since it is the product of one person (supported by many others), this process tends to lead to designs that are more cohesive than other methods that I have tried in the past.

Stages

Component Design

This is often the first step of my process and can precede the others by weeks or even months. In this phase, I try to lay down the high-level actors in the system and how they might interact to provide the desired functionality. This stage is often kicked off when a project is still being proposed and is used to help the customer understand how we might solve their problem. It can also help inform budgetary estimates by identifying what software platforms and additional infrastructure might be required to support the design.

Requirements Gathering

I know that this is often seen as the realm of a Business Analyst, but as a software engineer, I have found this to be a critical step for me as well. Even if the requirements are known by someone within the development team, I need to internalize them so that I can ensure that the design meets each one. This phase is also critical for helping me to identify requirements that might be mutually exclusive so that I can raise the issues to Program Management as quickly as possible.

Use-Case design

After the requirements have been identified, I move into creating simple use-case diagrams. While I am still exploring these diagrams to see how best to use them, I think I’m getting close. Basically, I use the use-case diagrams to identify the use cases that will meet the requirements (identified above) and which members of the system (from the component diagram) will probably need to take part in fulfilling each use case.

Activity Diagrams and Wireframing

This is the phase where the components are joined together with messages to show how each use case will be implemented. I know that wireframes are the realm of the graphic designer, but I cannot find a way to decouple these in my design process. I use the activity diagrams to show how information flows through the system, including the points where people need to provide input. The wireframes allow me to communicate to the graphic design team what information I envision being presented to the user and what actions I need them to be able to take. The structure and style of that presentation are not decided here, only the content itself. I have found that this provides the graphic designers an excellent starting point when they begin the “real” wireframes later on.

Sequence Diagrams

Depending on who will be writing the software, this step may or may not be taken. The intent of this stage is to create the detailed design of the structure of the components (e.g. packages, classes, methods, …) and how they will interact. This is, by far, the longest phase to execute, since almost every member of the application will be modeled via UML tools. It does, however, provide excellent guidance to external build teams, when they are being used. It also ensures that design patterns and company standards are adhered to.

Conclusion

Over the next few weeks, I plan on expanding on each of these topics to explain in more detail what work I do in each stage with some example of the output. Stay tuned…