Smashing Magazine

For Professional Web Designers and Developers

Efficiently Simplifying Navigation, Part 3: Interaction Design

Thu, 09/18/2014 - 13:13

Having addressed the information architecture and the various systems of navigation in the first two articles of this series, the last step is to efficiently simplify the navigation experience — specifically, by carefully designing interaction with the navigation menu.

When designing interaction with any type of navigation menu, we have to consider the following six aspects:

  • symbols,
  • target areas,
  • interaction event,
  • layout,
  • levels,
  • functional context.

It is possible to design these aspects in different ways. Designers often experiment with new techniques to create a more exciting navigation experience. And looking for new, more engaging solutions is a very good thing. However, most users just want to get to the content with as little fuss as possible. For those users, designing the aforementioned aspects to be as simple, predictable and comfortable as possible is important.


Symbols

Users often rely on small visual clues, such as icons and symbols, to guide them through a website’s interface. Creating a system of symbolic communication throughout the website that is unambiguous and consistent is important.

The first principle in designing a drop-down navigation menu is to make users aware that it exists in the first place.

The Triangle Symbol

A downward triangle next to the corresponding menu label is the most familiar way to indicate a drop-down menu and distinguish it from regular links.

A downward triangle next to the menu label is the most reliable way to indicate a drop-down. (Source: CBS)

If a menu flies out, rather than drops down, then the triangle or arrow should point in the right direction. The website below is exemplary because it also takes into account the available margin and adjusts the direction in which the menu unfolds accordingly.

A triangle or arrow pointing in the right direction is the most reliable way to indicate a fly-out menu. (Source: Currys)
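The margin check that decides which way a fly-out unfolds can be sketched as a small pure function. The names and pixel inputs below are illustrative, not taken from any of the sites shown:

```typescript
type FlyoutDirection = "left" | "right";

// Decide which way a submenu should unfold, given how much horizontal
// margin remains in the viewport. All values are in pixels.
// itemRight: distance from the viewport's left edge to the trigger's right edge.
function flyoutDirection(
  itemRight: number,
  submenuWidth: number,
  viewportWidth: number
): FlyoutDirection {
  // Fly out to the right only if the submenu fits in the remaining margin;
  // otherwise unfold to the left, as the menu above does near the viewport edge.
  return viewportWidth - itemRight >= submenuWidth ? "right" : "left";
}
```

In a real menu, the inputs would come from `getBoundingClientRect()` and `window.innerWidth`, and the triangle or arrow would then be pointed in the returned direction.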

The Plus Symbol

Another symbol that is used for opening menus is the plus symbol (“+”). Notice that the website below mixes symbols: an arrow for the top navigation menu and a “+” for the dynamic navigation menu to the left (although an arrow is used to further expand the dynamic menu — for example, to show “More sports”).

Some websites use a “+” to drop down or fly out menus. (Source: Nike)

Mixing symbols can be problematic, as we’ll see below. So, if you ever add functionality that enables users to add something (such as an image, a cart or a playlist), then “+” would not be ideal for dropping down or flying out a menu because it typically represents adding something.

The Three-Line Symbol

A third symbol often used to indicate a navigation menu, especially on responsive websites, is three horizontal lines.

Three horizontal lines are frequently used for responsive navigation menus. (Source: Nokia)

Note a couple of things. First, three lines, like a grid icon and a bullet-list icon, communicate a certain type of layout — specifically, a vertical stack of entries. The menu’s layout should be consistent with the layout that the icon implies. The website below, for example, lists items horizontally, thus contradicting the layout indicated by the menu symbol.

Three lines do not work well if the menu items are not stacked vertically. (Source: dConstruct 2012)

The advantage of the more inclusive triangle symbol and the label “Menu” is that they suit any layout, allowing you to change the layout without having to change the icon.

Secondly, even though three lines are becoming more common, the symbol is still relatively new, and it is more ambiguous, possibly representing more than just a navigation menu. Therefore, a label would clarify its purpose for many users.

An accompanying label would clarify the purpose of the three lines. (Source: Kiwibank)
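One way to pair the icon with a clarifying label is to render the toggle as a single button carrying both the three lines and the word “Menu”, while exposing its open/closed state to assistive technology. This markup string is a hypothetical sketch (the `site-nav` id is assumed), not code from any of the sites shown:

```typescript
// Render a navigation toggle that combines the three-line icon with an
// explicit "Menu" label. The icon is hidden from screen readers, which
// get the label and the aria-expanded state instead.
function renderMenuToggle(open: boolean): string {
  return (
    `<button type="button" aria-expanded="${open}" aria-controls="site-nav">` +
    `<span aria-hidden="true">☰</span> Menu</button>`
  );
}
```

A click handler would flip the boolean, re-render (or update the attribute in place), and show or hide the `site-nav` element accordingly.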

Consistent Use Of Symbols

Finding symbols that accurately represent an element or task is important, but you should also carefully plan their usage throughout the website to create a consistent appearance and avoid confusion.

Notice the inconsistent use of symbols in the screenshot below. The three lines in the upper-right corner drop down the navigation menu. The three lines in the center indicate “View nutrition info.” The “Location” selector uses a downward triangle, while the “Drinks” and “Food” menus, which drop down as well, use a “+” symbol.

Inconsistent symbols lead to confusion. (Source: Starbucks)

While using multiple symbols for a drop-down menu is inconsistent, using arrows for anything other than a drop-down menu causes problems, too. As seen below, all options load a new page, rather than fly out or drop down a menu.

Using a triangle or arrow for anything other than a drop-down or fly-out menu can cause confusion. (Source: Barista Prima)

This leads to a couple of problems. First, using arrows for regular links — whether to create the illusion of space or for other reasons — puts pressure on you to consistently do the same for all links. Otherwise, users could be surprised, not knowing whether a link will open a menu or load a new page altogether. Secondly, a single-level item, such as “Products”, could conceivably be expanded with subcategories in the future. A triangle could then be added to indicate this and distinguish it from single-level entries, such as the “About” item.

Users generally interpret an arrow to indicate a drop-down or fly-out menu. And they don’t have any problem following a link with no arrow, as long as it looks clickable. It is best not to mix these two concepts.
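That rule is easy to enforce when the indicator is derived from the navigation data rather than added by hand. A minimal sketch, assuming a simple `NavItem` shape of our own invention:

```typescript
interface NavItem {
  label: string;
  children?: NavItem[];
}

// Append the downward triangle only to items that actually open a submenu,
// so plain links never carry a misleading arrow.
function menuLabel(item: NavItem): string {
  const hasSubmenu = (item.children?.length ?? 0) > 0;
  return hasSubmenu ? `${item.label} ▾` : item.label;
}
```

Because the triangle is computed from the data, adding subcategories to a previously single-level item later (the “Products” scenario above) updates the indicator automatically.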



Why Companies Need Full-Time Product Managers (And What They Do All Day)

Wed, 09/17/2014 - 12:54

What is a product manager? What do product managers do all day? Most importantly, why do companies need to hire them? Good questions.

The first confusion we have to clear up is what we mean by “product.” In the context of software development, a product is the website, application or online service that users interact with. Depending on the size of the company and its products, a product manager could be responsible for an entire system (such as a mobile app) or part of a system (such as the checkout flow on an e-commerce website across all devices).

This is confusing because, in most contexts, a product is a thing you sell to people. Particularly in e-commerce, product managers often get confused with category managers, the team that sources and merchandises the products sold on an e-commerce website. So, yes, “product” probably isn’t the best word for it. But it’s what we’ve got, and it’s the word we’ll use in exploring this role.

To define the role of a product manager, let’s start by looking at Marc Andreessen’s view of the only thing that matters in a startup:

The quality of a startup’s product can be defined as how impressive the product is to one customer or user who actually uses it: How easy is the product to use? How feature rich is it? How fast is it? How extensible is it? How polished is it? How many (or rather, how few) bugs does it have?

The size of a startup’s market is the number, and growth rate, of those customers or users for that product…

The only thing that matters is getting to product-market fit. Product-market fit means being in a good market with a product that can satisfy that market.

Even though Andreessen wrote this for startups, the importance of that last sentence about product-market fit holds truth for every organization — whether the organization is getting a new product to market or redesigning an existing experience or anything in between. It is a universal road map to success, and it is the core of what product managers are responsible for.

With that as the backdrop, my definition of the role of a product manager would be to achieve business success by meeting user needs through the continual planning and execution of digital product solutions.

This definition summarizes all of the things that a product manager needs to obsess over: the target market, the intricacies of the product, what the business needs in order to succeed, and how to measure that success. It also encapsulates the three things that a product manager should never lose sight of:

  • The ultimate measure of success is the health of the business and, therefore, the value that the product provides to users.
  • Everything starts with a solid understanding of the target market and its needs, so that the focus remains on the quality of the product experience.
  • A continual cycle of planning and execution is required to meet these market needs in a sustainable way.

So, how does this translate to what a product manager does every day? That question is way too big to answer here, but as an introduction, Marty Cagan has a great list of common tasks that product managers are responsible for in his ebook Behind Every Great Product (PDF). The tasks include:

  • identifying and assessing the validity and feasibility of product opportunities,
  • making sure the right product is delivered at the right time,
  • developing a product strategy and road map for development,
  • leading the team in executing the product’s road map,
  • evangelizing the product internally to the executive team and colleagues,
  • representing customers through the product development process.

But before a product manager is able to do these things, a couple of awkward questions have to be asked. First, do companies really need product managers? And, if we can agree on that, what are the characteristics of a good one? Also, where does this role fit in an organization’s structure? Let’s explore these questions.

Why Companies Need Product Managers

The role of product manager can be a hard sell for some companies. Common objections to it include:

  • “We have various people in the organization whose roles fulfill each of these functions.”
  • “I don’t see how the role would make us more money.”
  • “Product managers would just slow us down.”
  • “I don’t want to relinquish control of the product to someone else.” (OK, this one is not usually said out loud.)

These appear to be valid concerns, but only if the role is not well understood — or if the organization has bad product managers who perpetuate these perceptions.

The truth is that, to be effective, the role of a manager for a particular product or area must not be filled by multiple people. It is essential for the product manager to see the whole picture — the strategic vision as well as the details of implementation — in order to make good decisions about the product. If knowledge of different parts of the process resides in the heads of different people, then no one will have that holistic view, and the role will be drained of all value.

Let’s look at two major benefits that product managers bring.

Product Managers Ensure a Market-Driven Approach

The key argument in favor of product managers is that they help companies to be driven by the needs and goals of the target market, not the forces of technology or fads. As Barbara Nelson puts it in “Who Needs Product Management?”:

It is vastly easier to identify market problems and solve them with technology than it is to find buyers for your existing technology.

If done right, a market-driven focus results in long-term, sustainable, profitable business, because the company will remain focused on solving market problems, as opposed to looking for things to do with the latest technologies. A market-driven focus is important because companies that have one are proven to be more profitable than those driven by other factors (31% more profitable, according to George S. Day and Prakash Nedungadi).

This doesn’t mean focusing on incremental change to the exclusion of product innovation. Identifying market problems is about not only finding existing issues to improve (for example, “60% of users drop off on this page, so let’s fix that”), but also about creating new products to satisfy unmet needs (“Cell phones suck — let’s make a better one”).

Product Managers Improve Time-to-Everything

The second major benefit of product managers is that they reduce the time an organization takes to reach its goals. A well-defined and appropriate product development process, run by effective managers, will improve both time-to-market and time-to-revenue.

The reason for the faster turnaround time is that a product manager is responsible for figuring out what’s worth building and what’s not. This means less time spent on the spaghetti approach to product development (throwing things against the wall to see what sticks) and more time spent on building products that have been validated in the market. This approach also sharpens the organization’s focus, enabling the organization to dedicate more people to products that are likely to succeed, instead of spreading people too thin on projects that no one is sure will have a product-market fit.

Characteristics Of A Good Product Manager

Now that we’ve covered the importance of product managers, the next question is, “Who are these people?”

Most of us are familiar with the idea of T-shaped people: those who have deep knowledge in one or two areas, with a reasonable understanding of a variety of disciplines related to their main field of focus. In 2009, Bill Buxton wrote an interesting article for Businessweek in which he calls for more “I-shaped” people:

These have their feet firmly planted in the mud of the practical world, and yet stretch far enough to stick their head in the clouds when they need to. Furthermore, they simultaneously span all of the space in between.

This is a good description of the unique blend of skills that product managers need. First, they need to have their head in the clouds. They need to be leaders who can look into the future and think strategically. They need to be able to develop a vision of where a product should go, and they need to be able to communicate that vision effectively. Furthermore, product managers need to show their teams how they plan to get to that vision. And I do mean show: through sketches, prototypes, storyboards, whatever it takes to get the message across. They also need to be flexible and be able to change course when needed; for example, when market needs or expectations shift substantially or a great business opportunity presents itself.

But a good product manager also has their feet on the ground. They pay attention to detail, and they know the product inside out. They are the product’s biggest user — its biggest fan and critic. They understand every aspect of the complexity that needs to be worked through in each product decision. And they’re able to make those decisions quickly, based on all of the information at their disposal.

Most importantly, a product manager knows how to ship. They know how to execute and rally a team to get products and improvements out into the world, where the target market can use them and provide feedback.

I-shaped people

In short, a product manager is a visionary as well as a doer, a manager as well as a maker. And they need to move seamlessly between those extremes, sometimes at a moment’s notice. Sound difficult? That’s only the beginning. Let’s look at some more characteristics of a good product manager.

Leader and Collaborator

Being a leader and a collaborator at the same time is a difficult balance to strike. The first challenge is that collaboration is often mistaken for consensus. That’s not the case. Consensus cultures often produce watered-down, unexciting products, whose endless rounds of give-and-take have worn the original idea down to a shadow of what it was. Consensus cultures also wear down the teams working on the product, because no one really gets what they want, only some of it.

Collaboration is different. In collaboration cultures, people understand that, even though everyone has a voice, not everyone gets to decide. People are free to air their opinions, argue passionately for how things should be done, and negotiate compromises. But that certainly doesn’t mean that everyone has to agree with every decision.

The first step to building a collaboration culture is to have a good leader. As you’ve probably surmised by now, the product manager is the ultimate decision-maker. But that only works if they are a trusted and respected leader in the organization, someone who can get the team excited about the vision, as well as make decisions that are best for the company and its customers. A good leader also readily admits when they have made a wrong decision, and they own up to it and do whatever they can to fix the mistake.

This isn’t a post about leadership — there are plenty of those to go around. But I’ll still share one piece of leadership advice from French writer and aviator Antoine de Saint Exupéry that has helped me over the years:

If you want to build a ship, don’t drum people up together to collect wood, and don’t assign them tasks and work. Rather teach them to long for the endless immensity of the sea.

What does “the endless immensity of the sea” mean in your context? Instead of telling people to build a bunch of features, how can you inspire them to think about how the product will help users accomplish their goals? That’s how you’ll be able to unite teams around a common vision.

So, how does a good leader foster this kind of collaboration culture? By creating an environment and processes that allow collaboration to feed on itself, and by understanding that every person is different and will react unpredictably at some point.

To create the right environment and processes for collaboration, focus on the physical environment first. Make sure that physical workspaces allow team members both to have impromptu discussions with each other and to shut out all distractions and focus on work for a period of time. The MailChimp office is a great example of this. The team created a collaborative workspace based on the following principles:

  • Commingle and cross-pollinate
    Instead of segregating teams, mix people up according to their personalities and the projects they’re working on. This will lead to valuable discussions that might not have happened if everyone was stuck in their own silo.
  • Facilitate movement
    Open desks, couches, standing tables: these are all elements that encourage people to move around and work together when needed.
  • Ideas everywhere
    Cover walls and whiteboards with sketches, designs, prioritization lists and road maps. This will not only contribute to better communication, but also leave the door open for anyone to improve ideas that others are working on.
  • Create convergence
    A common space for lunch (and coffee!) is important because it will allow people to run into each other, even people who don’t normally work together on projects. Again, this can lead to great ideas and perspectives.
  • Create retreats
    The hustle and bustle of collaboration spaces has great energy, but it is sometimes distracting. Individuals and teams occasionally need a quiet space to work, so make sure they have meeting rooms or quiet retreats that prevent any interruption.

Workspaces are more important than we might think. We went to great lengths to create a welcoming, creative space at the studio I used to work at, and the effort is paying off. Most clients prefer to come to us for meetings, and they cite two reasons: the excellent coffee (we went a little overboard on the coffee) and the great atmosphere to work in.

Steve Jobs understood the value of physical spaces very well. He is quoted in Walter Isaacson’s biography as saying this about the design of Pixar’s new campus:

If a building doesn’t encourage [collaboration], you’ll lose a lot of innovation and the magic that’s sparked by serendipity. So we designed the building to make people get out of their offices and mingle in the central atrium with people they might not otherwise see.

Of course, physical space is only one part of the equation. A lot of work happens remotely now, and we have enough tools at our disposal to make that an effective and rewarding experience for everyone involved. From communication tools like Campfire, HipChat and Slack to collaborative project-management tools like Trello, Basecamp and Jira to source-code repositories like GitHub and Bitbucket, we have no excuse anymore to force everyone to be in the same physical space at all times. There is still much value in talking face to face and in collaborating during certain stages of the process, but even that can happen in digital spaces.

So, what’s next after you’ve worked on the physical and digital environments? A word many people fear: process. Many think “process” is synonymous with “things I have to do instead of working.” But a lot of appropriate, or “right-fidelity,” processes are possible. To quote Michael Lopp: “Engineers don’t hate process. They hate process that can’t defend itself.” When it comes to creating a culture of collaboration, several processes — defendable processes — can make life easier for the whole team.

One essential process to get right is regular feedback sessions on design, development and business decisions. The challenge is that feedback sessions can get out of hand quickly, because we’re just not very good at providing (or getting) feedback. We are prone to seeing the negative elements of someone’s ideas first, so we often jump right into the teardown. This puts the person on the receiving end in a defensive mode right away, which usually begins a spiral down into unhelpful arguments and distrust.

There is a better way. In an interview on criticism and judgment, French philosopher Michel Foucault laid out the purpose of any good critique. In his view, criticism should focus not on what doesn’t work, but on how to build on the ideas of others to make them better:

I can’t help but dream about a kind of criticism that would try not to judge but to bring an oeuvre, a book, a sentence, an idea to life; it would fight fires, watch grass grow, listen to the wind, and catch the sea foam in the breeze and scatter it. It would multiply not judgements but signs of existence; it would summon them, drag them from their sleep. Perhaps it would invent them sometimes — all the better. Criticism that hands down sentences sends me to sleep; I’d like a criticism of scintillating leaps of the imagination. It would not be sovereign or dressed in red. It would bear the lightning of possible storms.

Keeping this purpose in mind, let’s turn to the process used by Jared Spool and his team at User Interface Engineering. The team uses this process specifically for design critiques, but it could be applied to any kind of feedback session:

  1. The person presenting their idea or work describes the problem they are trying to solve.
  2. If everyone agrees on the problem, the team moves on. If there is not agreement on the problem to be solved, some discussion is needed to clarify. Hopefully, this step isn’t needed, though.
  3. Next, the presenter communicates their idea or shows their work to the team. The goal is not only to show the finished product, but to explain the thought process behind it. The presenter should remain focused on how the idea will solve the problem that everyone has agreed on.
  4. The first step in giving feedback is for the people in the room to point out what they like about the idea. This isn’t a ruse to deliver a crap sandwich (you know, start and end with something positive and eviscerate in the middle). Rather, this step highlights which approach to the problem is desirable.
  5. Critique ensues, not as direct attacks or phrases such as “I don’t like…,” but as questions about the idea. Team members will ask whether a different solution was considered, what the reason was for a particular choice and so on. This gives the presenter a chance to respond if they’ve thought through the issue already, or else to make a note to address it for the next iteration.
  6. At the end of the meeting, the team reviews the notes, especially what everyone liked and what questions they had. The presenter then goes away to work on the next iteration of the idea.

As the product manager, you are responsible for making sure that feedback sessions happen and that they are respectful and useful.

The goal of collaboration is for participants to make ideas better by building on the best parts of different thoughts and viewpoints. As long as people trust that the decision-maker (that’s you, dear product manager) has the best interests of the product and company at heart, then they won’t have a problem not getting their way every once in a while. Be confident, trustworthy and decisive — and make sure that everyone feels comfortable raising their opinion with the team.

All of this is much easier said than done, of course. Product managers need to steer the team through the collaboration process, and sometimes the trust just won’t be there in the beginning. That’s OK — trust takes time. Live these values, lead by example, and the culture will come.

Communicator and Negotiator

A more accurate label for this section might be “Overcommunicator and Negotiator,” because if there’s one thing a product manager never gets tired of, it’s telling people what’s happening. But instead of sending a ton of email, a better way is to work in the open as much as possible. Make sure that notes, sketches, plans and strategies are all accessible to everyone in the company at all times. This could take the form of whiteboards that are placed across the office or in a company wiki or in project spaces. Working out in the open has the added benefit of giving context to conversations: All comments and decisions will be in one place, instead of spread out over multiple emails (or, worse, in meetings where no one remembers to take notes).

Being a product manager sometimes feels like you’re being torn limb from limb. Most stakeholders have only their own department’s interests at heart (as they should — they’re paid to fight for what they care about). In contrast, a product manager needs to negotiate the best solution from all of the different directions that stakeholders want to take, and then communicate that decision effectively and without alienating people who don’t get their way. That’s not an easy job.

What product management sometimes feels like (Image: central panel of “Martyrdom of St Hippolyte” triptych, Dieric Bouts, c1468)

The design community has a phrase to refer to the difficult process of managing the expectations (and assertions) of a variety of stakeholders: design by committee. Like consensus culture, decision-by-committee cultures are pervasive, particularly in large organizations. I’ve always liked the approach that Speider Schneider proposes in his article “Why Design-By-Committee Should Die”:

The sensible answer is to listen, absorb, discuss, be able to defend any design decision with clarity and reason, know when to pick your battles and know when to let go.

This is not as easy as it sounds. So, over time, I’ve developed the following guidelines to deal with decision-by-committee in a systematic way.

Respond to Every Piece of Feedback

Responding to every demand, criticism, question and idea takes time. But failing to respond will waste even more time and energy down the road. Someone listening to another person’s idea and deciding not to use it is one thing. Someone not even listening is something else entirely. Instead of dealing with the political ramifications of not hearing people out, take the time to respond thoughtfully whenever someone offers feedback or an idea (no matter how unfeasible).

Note What Feedback Is Incorporated

When you implement a good idea, don’t do it quietly. It’s an opportunity to show that you’re flexible and open to good feedback. Let people know when and how their ideas are being used. Also, this should go without saying, but don’t take credit for someone else’s idea.

When Feedback Is Not Incorporated, Explain Why

Most of the feedback you’ll receive can’t realistically be incorporated into the product. Don’t sweep those decisions under the rug. By forcing yourself to be clear and straightforward about which feedback won’t be incorporated, you’ll also force yourself to think through the decision and defend it properly. Sometimes you’ll even realize that what you initially dismissed as a bad idea would be an improvement after all. People are generally OK with their feedback not being used, as long as they know that they’ve been heard and that there’s a good reason for the decision.

Use a Validation Stack to Defend Decisions

In their book Undercover User Experience Design, Cennydd Bowles and James Box explain the user experience validation stack, yet another method that can be used to defend product decisions. When defending a decision, always try to cite user data as evidence, such as usability testing and website analytics. If you don’t have direct access to user data, look for research — either research you’ve done or third-party research into related areas. If all else fails, fall back on theory. The principles of visual perception, persuasion, psychology and so on could be very handy in explaining why you’ve made certain decisions.

These guidelines should make it easier to negotiate different needs and requests from internal stakeholders. But remember Speider’s recommendation in his article: Pick your battles, and know when to let go. That’s the art of being a good negotiator and communicator.

Passionate and Empathetic

Product managers love and deeply respect well-designed, well-made products, both physical and digital. And they live to create such products. They are the people who go to parties and can’t shut up about a new website or app or, more likely, can’t shut up about how cool what they’re working on is.

They’re passionate not only about product, but about users, too. They understand the market well: their customers’ values, priorities, perceptions and experiences. Passion for product is useless without empathy for its users. Building a great product is not possible without getting into the minds of the people who will use it. If we want to anticipate what people want and guide them along that path, then empathy is non-negotiable.

Qualified and Curious

Product managers usually come from specialist backgrounds, such as user experience design, programming and business analysis. To apply their specialized knowledge to this new field — in other words, to become more I-shaped — they will need to be able to learn new skills very quickly (and under great pressure). Insatiable curiosity is a prerequisite for this ability to learn quickly. Why? Cap Watkins puts it well:

If you’re intensely curious, I tend to worry less about those other skills. Over and over I watch great designers acquire new skills and push the boundaries of what can be done through sheer curiosity and force of will. Curiosity forces us to stay up all night teaching ourselves a new Photoshop technique. It wakes us up in the middle of the night because it can’t let go of the interaction problem we haven’t nailed yet. I honestly think it’s the single most important trait a designer (or, hell, anyone working in tech) can possess.

A good product manager does whatever it takes to make a product successful. They constantly worry about the tiniest of details, as well as the biggest of strategy questions. Rather than feeling overwhelmed by the sheer amount of what needs to be done, their curiosity pushes them to remain committed and to become as qualified as possible to make the right decisions.

Trustworthy and Ethical

A good product manager inspires trust in their team with every decision they make. To be trustworthy, they need to be fair (more on this later) and consistent, and they need to always take responsibility for their decisions. They also have to admit when they’re wrong, which is difficult at the best of times.

On the one hand, a product manager needs to be confident in the decisions they make. They need to constantly learn and grow and hone their craft. Theory and technique need to become so ingrained that they become second nature, the cornerstone of everything they do.

On the other hand, they need to be open to the fact that some of their decisions will be wrong. In fact, they need to welcome it. They should hang on to a measure of self-doubt every time they present a solution to the team or to the world. Admitting that someone else’s idea is better than yours and making changes based on good criticism will do wonders for improving the product — and it will build trust among the team. John Lilly phrases what should be a mantra for all product managers: “Design like you’re right; listen like you’re wrong.”

The best product managers are those who are guided by a strong and ethical perspective on the world. A discussion of ethics will only get me into trouble here, but it would be wrong not to at least touch on the subject. In short, we’re not just making products; we are putting a stamp on the world, and we have an opportunity to make the world a better place. Perhaps no one says it better than Mike Monteiro in Design Is a Job:

I urge each and every one of you to seek out projects that leave the world a better place than you found it. We used to design ways to get to the moon; now we design ways to never have to get out of bed. You have the power to change that.

How do we identify projects and problems that fit these criteria? One way is to watch out for what Paul Graham calls “schlep blindness”: our inability to identify hard problems to solve, mostly because we’re just not consciously looking for them. Paul’s advice to combat this? Instead of asking what problem you should solve, ask what problem you wish someone else would solve for you.

Another great source of ideas for worthy projects is the field of social entrepreneurship (i.e. pursuing innovative solutions to social problems). Meagan Fallone has a great overview of the nature and importance of this type of work:

We in turn can teach Silicon Valley about the human link between the design function and the impact for a human being’s quality of life. We do not regard the users of technology as “customers,” but as human beings whose lives must be improved by the demystification of and access to technology. Otherwise, technology has no place in the basic human needs we see in the developing world. Sustainable design of technology must address real challenges; this is non-negotiable for us. Social enterprise stands alone in its responsibility to ensuring sustainability and impact in every possible aspect of our work.

The book Wicked Problems is a great source of ideas on how to put our effort towards meaningful work.

Of course, people define socially important work differently. That’s OK — what’s important is to think it through and to clearly delineate the work you want to be involved in.

Responsible and Flexible

To garner sympathy from others, product managers like to say that the most difficult part of their job is that they have all of the responsibility but none of the authority. In other words, even though product managers are responsible for the success and failure of their products, no one normally reports to them. This is why good communication and collaboration skills are so crucial.

The danger of having all of the responsibility for a product is rigidity: not letting go of tasks that could easily be delegated and stubbornly sticking to the plan when circumstances have changed. That’s why product managers must remain flexible. Planning is critical, and an essential part of planning is allowing for the right information to change the plan if needed.

This need for flexibility can unnerve some product managers, but it’s a necessary part of the process of building a great product. So, get comfortable with ambiguity. This job has a lot of it.

In Fairness

This one is the most important characteristic of a product manager — the one that rules them all. I once had a discussion with a colleague in our development team about the development process for new products that we had rolled out a few months before. One of the words he used to describe the new process was “fair.”

It was a passing comment, and I didn’t really think much of it at the time, but since then I’ve kept going back to that conversation and the importance of fairness in product management. All of the characteristics I’ve talked about are great, but fairness is the one that a product manager simply cannot do without.

Let’s look at one definition of the word and consider what it means in product management:

fair (adjective)

free from favoritism or self-interest or bias or deception.

Free From Favoritism

One of the fastest ways for a product manager to become ineffective is to play favorites with a team, product line or user base. As soon as people sense that you are not looking at all ideas equally and fairly, their trust in you will inevitably erode. And without trust, you’ll have to work a lot harder (and longer) to get people to follow your road map.

Free From Self-Interest

If you start doing things purely for reasons like “Because I want to” or “Because my performance is being measured this way,” then trust will erode again. You cannot be effective by nursing your pet projects and ignoring the needs around you.

Free From Bias

This often happens when product managers receive news they don’t want to hear, especially from the user research or analytics teams. If something doesn’t test well, don’t make up reasons why you are right and users are wrong. Do the right thing and realign the design.

One of the hardest skills for a product manager to learn is to take their own emotions and feelings out of the equation when making decisions. Yes, a lot of gut feeling goes into a product vision, but that should not be based on personal preference or preconceived ideas. This is much easier said than done, but it’s something to strive for and to be aware of at all times.

Free From Deception

This one seems obvious, but you see it often, especially with metrics and assessment. Don’t ignore or distort negative data or blame a problem on someone else. Your job is to own the product, and this means owning its successes and its failures. You’ll gain trust and respect only if you acknowledge the failures as much as the successes and commit to doing better next time.

A product manager is often referred to as “the great diplomat,” and with good reason. Our responsibility is to balance the variety of needs from inside and outside the company and to somehow turn that into a road map that generates business value and meets user needs. A focus on fairness will help to accomplish that goal:

  • Fairness to users
    Approach users with respect, openness and transparency. Understand their needs, and explain to them why you might need to do something that will make it more difficult for them to meet those needs.
  • Fairness to the company
    Do everything you can to understand the needs of marketing, merchandising, customer support and other departments. Pull them into the planning process; be clear about how projects are prioritized; and help them adjust to that process so that they can define their project goals in a way that gets them on the road map.
  • Fairness to technology
    Don’t try to force the development team to make the product’s technology do things it’s not capable of doing. Understand the technical debt in the organization, and work actively to make those improvements a part of regular development cycles.

A lot of this comes naturally with good product managers, but we need to be conscious of it every day. Fairness is a prerequisite to building great products. If you’re not fair, you’ll be dead in the water, working with a team that has no reason to trust that you’re doing the right thing.

A Prerequisite For Success

One last topic needs to be addressed. An organization can hire the best product managers in the world and implement the best development processes, but it will still fail if one prerequisite for success is not met. There needs to be an executive mandate and company-wide understanding that, even though everyone has a voice, decisions about product ultimately rest with the product manager.

This one is hard to swallow for many companies. When I mention this part in training courses on product management, the mood in the room often changes. This is when people start complaining that, even though they see the value in the role, it would never work at their company because team leaders aren’t willing to give up that ultimate control over the product. The strategies for dealing with this warrant another article. For now, here’s what Seth Godin reportedly once said: “Nothing is what happens when everyone has to agree.” The product manager is there to make sure things happen — the right things.

When everyone has a say. (Image: Dilbert, 1 July 2010)

Executive teams and individual contributors have to buy into this role. If they don’t, then the product manager will become impotent and a frustrated bystander to a process that continues to spiral out of control. And they’ll end up going somewhere where their value is appreciated.

What Now?

We’ve covered a bunch of what might be considered “soft issues” in product management: what product managers are like, how they work with other people, what differentiates a good one from a bad one. It’s tempting to skim over these issues to get to the how — the processes and day-to-day activities of the role. But that would be a mistake. I haven’t seen a role in product development that relies more on these soft skills than that of the product manager. A product manager could have the best strategy and could execute brilliantly, but if they’re not able to work well with people and rally them around a cause, they will fail. So, if you’ve skipped over any of those sections, consider going back and reading them.

We can’t stop here, of course. Now that the foundation is in place, it’s time to move on to how product managers spend their day. If you’d like to read more about product planning (how to prepare for and prioritize product changes) and product execution (how to ship those changes), then check out my book Making It Right23!



The post Why Companies Need Full-Time Product Managers (And What They Do All Day) appeared first on Smashing Magazine.

Essential Visual Feedback Tools For Web Designers

Tue, 09/16/2014 - 12:27

The creative process takes a lot of time, and web designers know it. When you factor in feedback from clients, the process takes even longer: numerous emails, revision notes, chats and meetings — that’s what it normally takes to find out precisely what the client wants.

Fortunately, today’s web provides various solutions to optimize the communication process. The first web services that allowed users to report bugs on web pages appeared several years ago. Since then, tools and technologies have emerged to make the process more convenient and user-friendly. Today’s market offers several useful products for visual bug-tracking, each with its pros and cons.

We have selected our top five tools and compared their features, functionality and pricing. We hope that this review will simplify your workflow and speed up communication between your team and clients.
InVision LiveCapture

InVision is a prototyping tool that enables designers to easily collaborate on design projects and to showcase their work. Quite recently, the company announced the release of a new tool, InVision LiveCapture. It grabs full-length snapshots of live websites and helps users provide feedback during the design process.

InVision enables designers to collaborate and to showcase their work.

User Experience

Getting started with InVision is easy. Just add your first project and install the extension for Google Chrome. To start commenting on a web page, click on the extension icon and select the project.

InVision LiveCapture extension.

When someone adds a comment, feedback appears on the normal screen in InVision, which would already be familiar to the designer.

InVision LiveCapture extension.

The service is targeted at web designers who are looking for an easy way to get feedback on their work in progress. Unfortunately, it cannot be used to comment on a live web page, integrated into a large team’s workflow or used as a bug-tracking tool. On the other hand, it’s free!

Pros
  • Easy to work with
  • Great for collecting inspirational ideas
  • Ideal for existing customers looking to get more out of InVision
Cons
  • Works with Google Chrome only
  • Browser extension required
  • Reporting bugs is difficult
Third-Party Integration
  • InVision
  • Doesn’t support third-party project-management or bug-tracking systems
Supported Browsers
  • Chrome

Pricing

The free plan is available for individual use. A subscription plan for teams with up to five members and unlimited projects costs $100 per month.


TrackDuck

TrackDuck is a visual feedback tool that is ideal for web designers and developers. Users can highlight issues on the page, comment and receive feedback in real time. Bug reports are sent directly to the dashboard, where you can chat with others, assign tasks and track progress.

With TrackDuck, users can highlight issues on the page, comment and receive feedback in real time.

User Experience

Immediately after you register, which takes about a minute, a simple wizard helps you to get started with your first project in three simple steps: enter a website address, install the extension for capturing high-quality screenshots, and invite colleagues to collaborate on the project. The website that you add then opens in a new tab, and you can start commenting right on the web page by selecting and dragging the mouse wherever you notice a bug. You can appoint a colleague to fix the issue and assign a priority to the task.

It works even for responsive websites, and you can always check what a bug looks like on your smartphone.

All tasks and bug reports are available in the dashboard. Issues are also visible as pins on the web page itself.

TrackDuck currently integrates with only a couple of systems: JIRA and Basecamp. The integration works both ways — tasks are sent and updated automatically in both systems, no double entry required.

Pros
  • Unlimited number of team members
  • Cross-browser
  • Technical details are collected automatically
  • Real time with WebSockets
  • No extensions needed
  • Anonymous feedback allowed
Cons
  • Extension required to capture high-quality screenshots
  • Few systems to integrate with
Third-Party Integration
  • Basecamp (both ways)
  • JIRA (both ways)
  • WordPress plugin
  • Modx plugin
Supported Browsers
  • Chrome
  • Internet Explorer
  • Safari
  • Firefox
  • bookmarklet (for other browsers)

Pricing

You can get a free 14-day trial of the fully functional version. The free version allows for one project and unlimited contributors. Paid subscription plans start at $19 per month. Custom enterprise plans are also available.


BugMuncher

BugMuncher is a handy web application that allows you to highlight problems on a website. Users can make notes on specific areas of a website where they notice issues. BugMuncher will automatically take a screenshot of the area where the highlight was made and send it to you.

BugMuncher allows you to highlight problems on a website.

User Experience

Installing and getting started with BugMuncher isn’t complicated. All you have to do is embed the code on your website. Anyone who has ever set up Google Analytics could handle it. There is no onboarding tour, which could be a problem for inexperienced users. Also, when you embed the code on your website, a small shortcut appears for all visitors to the page, allowing them to highlight or (if they need to hide personal data) to black out certain parts of the page. However, you cannot add separate text comments to the web page.


When testing this tool, I couldn’t scroll down the page and was confined to working within the visible area, which is a bit odd.

After you have highlighted an area of the page, you will be prompted to enter your email address, describe the bugs you’ve found and submit them. You can access all reports, configure email alerts and set up integration in the dashboard.

Pros
  • Cross-browser
  • Automatically attaches technical details
  • No extension needed
  • Allows for anonymous feedback
Cons
  • Not real time
  • Cannot select a particular area of a page
  • No way to comment on dynamic elements on the page
Third-Party Integration
  • GitHub
  • activeCollab
  • Bitbucket
  • Trello
  • Zendesk
Supported Browsers
  • Chrome
  • Firefox
  • Safari

Pricing

A free 30-day trial is available. After that, pricing starts at $19 per month for a personal subscription with one account. If your team has five members, you would pay $199 per month.


BugHerd

BugHerd is a simple bug-tracking and collaboration tool for web designers and their clients. It allows users to flag issues on web pages using a simple point-and-click interface and to submit reports of problems.

BugHerd allows users to flag issues using a simple point-and-click interface and to submit reports of problems.

User Experience

You don’t need a credit card to sign up. Registration is pretty simple: You can try the tool by installing a browser extension or by adding JavaScript code to your website. Once you’ve done one of these two things, you’ll be able to add websites to work with. Don’t forget to include the http:// or https://, or else it won’t work.

After completing the short onboarding process, you’ll be all set and can pin issues and bugs directly to web pages. You can’t highlight random areas of the page; the tool is tied to DOM elements — a useful but questionable solution. In particular, we had problems selecting large areas of the page.

While not a major issue, the indicator can be hard to notice on light and gray backgrounds, so you might have to refer to the task list to find it sometimes.

Another drawback is the size and location of the side panel, which occupies a big part of the page, hiding your pins and most of the page.

The side panel is a bit too big.

BugHerd offers quite a lot of integration (we tried Basecamp and JIRA). Unfortunately, integration seems to work only one way for now — tasks created in BugHerd are sent to Basecamp, but if you update them directly in Basecamp, you won’t be notified of the changes in BugHerd.

BugHerd’s toolbar.

Overall, the product is very good. The UX is questionable in places — as mentioned, the side panel is just too big. Prices are a little steep, too; prepare to pay almost $100 for a team of five.

Pros
  • Highlight bugs directly on the web page
  • Anonymous comments allowed
  • Screenshot automatically sent with every bug report
  • A lot of third-party integration
Cons
  • Sidebar is too big
  • Integration with third-party systems is one way
  • Extension required to capture screenshots
  • Expensive for teams
Third-Party Integration
  • JIRA
  • Basecamp
  • GitHub
  • Redmine
  • Zendesk
  • Pivotal Tracker
Supported Browsers
  • Chrome
  • Firefox
  • Safari
  • Internet Explorer

Pricing

Pricing starts at $19 per month. A short free trial is available, after which you’ll have to pay for each user on the account.


Notable

Notable is a web-based application for sharing feedback on images, mockups and live website designs. The user takes a screenshot of any interface, draws a box around the area they want to comment on and then types in their feedback.

Notable is a web-based application for sharing feedback on images, mockups and live website designs.

User Experience

You start the registration process by entering your payment details, even if you’ve selected a trial account. If you want to skip this step, then you just have to watch a demo video of the service on YouTube and then install the required extension. Then, when you click the extension’s icon, the app automatically takes a screenshot and uploads it to the server. After the upload, you are redirected to the saved screenshot, where you can highlight any area and add comments.

Notable in action.

It also saves HTML and CSS from the page, including meta tags, in text format. This separation of code, styles and images seems less useful than BugHerd and TrackDuck’s method, but it might appeal to some users.

Pros
  • Export custom PDFs
  • Unlimited team members
  • Ability to comment on source code
Cons
  • Credit card is required
  • Browser extension is required
  • Cannot mark up dynamic elements of web pages
Third-Party Integration
  • No information found
Supported Browsers
  • Chrome
  • Internet Explorer
  • Safari
  • Firefox
  • bookmarklet (for other browsers)

Pricing

Pricing starts at $19 per month, and a 30-day trial is available. A credit card is required for all plans.

Our Choice

Before visual feedback tools, it was difficult to imagine a project manager’s daily workflow without the endless emails and chats between designers and developers. Email was the primary means of communication, and it felt like a big waste of time.

A colleague of ours introduced us to BugHerd, which is an awesome tool for collaborating on web projects, and we started using it. Later, we switched to TrackDuck for a few reasons. First, the service is relatively new and takes advantage of modern web technology. It also offers the same functionality but is significantly more affordable for medium and large teams. In addition, we use Basecamp to manage projects, and the two apps integrate nicely. As a bonus, TrackDuck offers two-way integration, with updates being sent to both systems automatically.


Conclusion

Visual-feedback and bug-tracking services are becoming ever more popular, and integrating one of them into your workflow will simplify communication on any web project. Taking the five that we’ve identified and that we think are the most useful, we’ve summarized the advantages of each below to help you determine which is right for you.

InVision LiveCapture
  • Easy to work with
  • Great for collecting inspirational ideas
  • Ideal for existing customers looking to get more out of InVision
  • Pricing: free for InVision users

TrackDuck
  • Unlimited number of team members
  • Cross-browser
  • Technical details are collected automatically
  • Real time with WebSockets
  • No extension needed
  • Anonymous feedback allowed
  • Pricing: $0 – $49; 14-day trial; free plan forever

BugMuncher
  • Cross-browser
  • Automatically attaches technical details
  • No extension needed
  • Allows for anonymous feedback
  • Pricing: $19 – $199; 30-day trial

BugHerd
  • Highlight problems directly on the web page
  • Anonymous comments allowed
  • Screenshot sent automatically with each bug report
  • A lot of third-party integration
  • Pricing: $29 – $180; 14-day trial

Notable
  • Export custom PDFs
  • Unlimited team members
  • Ability to comment on source code
  • Pricing: $19 – $99; 30-day trial; credit card required

If you have experience using visual feedback services, please let us know in the comments below.

(al, ml)


The post Essential Visual Feedback Tools For Web Designers appeared first on Smashing Magazine.

Making Modal Windows Better For Everyone

Mon, 09/15/2014 - 11:23

To you, modal windows might be a blessing of additional screen real estate, providing a way to deliver contextual information, notifications and other actions relevant to the current screen. On the other hand, modals might feel like a hack that you’ve been forced to commit in order to cram extra content on the screen. These are the extreme ends of the spectrum, and users are caught in the middle. Depending on how a user browses the Internet, modal windows can be downright confusing.

Modals quickly shift visual focus from one part of a website or application to another area of (hopefully related) content. The action is usually not jarring if initiated by the user, but it can be annoying and disorienting if it occurs automatically, as happens with the modal window’s evil cousins, the “nag screen” and the “interstitial.”

However, modals are merely a mild annoyance in the end, right? The user just has to click the “close” button, quickly skim some content or fill out a form to dismiss it.

Well, imagine that you had to navigate the web with a keyboard. Suppose that a modal window appeared on the screen, and you had very little context to know what it is and why it’s obscuring the content you’re trying to browse. Now you’re wondering, “How do I interact with this?” or “How do I get rid of it?” because your keyboard’s focus hasn’t automatically moved to the modal window.

This scenario is more common than it should be. And it’s fairly easy to solve, as long as we make our content accessible to all through sound usability practices.

For an example, I’ve set up a demo of an inaccessible modal window that appears on page load and that isn’t entirely semantic. First, interact with it using your mouse to see that it actually works. Then, try interacting with it using only your keyboard.

Better Semantics Lead To Better Usability And Accessibility

Usability and accessibility are lacking in many modal windows. Whether they’re used to provide additional actions or inputs for interaction with the page, to include more information about a particular section of content, or to provide notifications that can be easily dismissed, modals need to be easy for everyone to use.

To achieve this goal, first we must focus on the semantics of the modal’s markup. This might seem like a no-brainer, but the step is not always followed.

Suppose that a popular gaming website has a full-page modal overlay and has implemented a “close” button with the code below:

<div id="modal_overlay">
  <div id="modal_close" onClick="modalClose()">
    X
  </div>
  …
</div>

This div element has no semantic meaning behind it. Sighted visitors will know that this is a “close” button because it looks like one. It has a hover state, so there is some visual indication that it can be interacted with.

But this element has no inherent semantic meaning to people who use a keyboard or screen reader.

There’s no default way to enable users to tab to a div without adding a tabindex attribute to it. However, we would also need to add a :focus state to visually indicate that it is the active element. That still doesn’t give screen readers enough information for users to discern the element’s meaning. An “X” is the only label here. While we can assume that people who use screen readers would know that the letter “X” means “close,” if it was a multiplication sign (using the HTML entity &times;) or a cross mark (&#x274c;), then some screen readers wouldn’t read it at all. We need a better fallback.

We can circumvent all of these issues simply by writing the correct, semantic markup for a button and by adding an ARIA label for screen readers:

<div id="modal_overlay">
  <button type="button" class="btn-close" id="modal_close" aria-label="close">
    X
  </button>
</div>

By changing the div to a button, we’ve significantly improved the semantics of our “close” button. We’ve addressed the common expectation that the button can be tabbed to with a keyboard and appear focused, and we’ve provided context by adding the ARIA label for screen readers.

That’s just one example of how to make the markup of our modals more semantic, but we can do a lot more to create a useful and accessible experience.

Making Modals More Usable And Accessible

Semantic markup goes a long way to building a fully usable and accessible modal window, but still more CSS and JavaScript can take the experience to the next level.

Including Focus States

Provide a focus state! This obviously isn’t exclusive to modal windows; many elements lack a proper focus state in some form or another beyond the browser’s basic default one (which may or may not have been cleared by your CSS reset). At the very least, pair the focus state with the hover state you’ve already designed:

.btn:hover,
.btn:focus {
  background: #f00;
}

However, because focusing and hovering are different types of interaction, giving the focus state its own style makes sense.

.btn:hover {
  background: #f00;
}

:focus {
  box-shadow: 0 0 3px rgba(0,0,0,.75);
}

Really, any item that can be focused should have a focus state. Keep that in mind if you’re extending the browser’s default dotted outline.

Saving Last Active Element

When a modal window loads, the element that the user last interacted with should be saved. That way, when the modal window closes and the user returns to where they were, the focus on that element will have been maintained. Think of it like a bookmark. Without it, when the user closes the modal, they would be sent back to the beginning of the document, left to find their place. Add the following to your modal’s opening and closing functions to save and reenable the user’s focus.

var lastFocus;

function modalShow() {
  lastFocus = document.activeElement;
}

function modalClose() {
  lastFocus.focus(); // place focus on the saved element
}

Shifting Focus

When the modal loads, focus should shift from the last active element either to the modal window itself or to the first interactive element in the modal, such as an input element. This will make the modal more usable because sighted visitors won’t have to reach for their mouse to click on the first element, and keyboard users won’t have to tab through a bunch of DOM elements to get there.

var modal = document.getElementById('your-modal-id-here');

function modalShow() {
  modal.setAttribute('tabindex', '0');
  modal.focus();
}

Going Full Screen

If your modal takes over the full screen, then obscure the contents of the main document for both sighted users and screen reader users. Without this happening, a keyboard user could easily tab their way outside of the modal without realizing it, which could lead to them interacting with the main document before completing whatever the modal window is asking them to do.

Use the following JavaScript to confine the user’s focus to the modal window until it is dismissed:

function focusRestrict(event) {
  document.addEventListener('focus', function (event) {
    if (modalOpen && !modal.contains(event.target)) {
      event.stopPropagation();
      modal.focus();
    }
  }, true);
}

While we want to prevent users from tabbing through the rest of the document while a modal is open, we don’t want to prevent them from accessing the browser’s chrome (after all, sighted users wouldn’t expect to be stuck in the browser’s tab while a modal window is open). The JavaScript above prevents tabbing to the document’s content outside of the modal window, instead bringing the user to the top of the modal.

If we also put the modal at the top of the DOM tree, as the first child of body, then hitting Shift + Tab would take the user out of the modal and into the browser’s chrome. If you’re not able to change the modal’s location in the DOM tree, then use the following JavaScript instead:

var m = document.getElementById('modal_window'),
    p = document.getElementById('page');

// Remember that <div id="page"> surrounds the whole document,
// so aria-hidden="true" can be applied to it when the modal opens.
function swap () {
  p.parentNode.insertBefore(m, p);
}
swap();

If you can’t move the modal in the DOM tree or reposition it with JavaScript, you still have other options for confining focus to the modal. You could keep track of the first and last focusable elements in the modal window. When the user reaches the last one and hits Tab, you could shift focus back to the top of the modal. (And you would do the opposite for Shift + Tab.)
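The first option can be sketched like this. Here, `wrapTarget()` is a hypothetical helper invented for this example; it decides where focus should wrap to and returns `null` when the browser may proceed normally. The selector list for focusable elements is illustrative, not exhaustive.

```javascript
// Pure helper: given the Tab direction and the active element,
// return the element to wrap focus to, or null to do nothing.
function wrapTarget( shiftKey, active, first, last ) {
  if ( shiftKey && active === first ) { return last; }  // Shift+Tab on the first element: wrap to the last
  if ( !shiftKey && active === last ) { return first; } // Tab on the last element: wrap to the first
  return null;
}

function trapTabKey( modal ) {
  var nodes = modal.querySelectorAll('a[href], button, input, textarea, select'),
      first = nodes[0],
      last  = nodes[nodes.length - 1];

  modal.addEventListener('keydown', function( e ) {
    if ( e.keyCode !== 9 ) { return; } // only handle the Tab key
    var target = wrapTarget( e.shiftKey, document.activeElement, first, last );
    if ( target ) {
      e.preventDefault();
      target.focus();
    }
  });
}
```

Keeping the wrap decision in a small pure function makes the behavior easy to reason about and to test apart from the DOM.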

A second option would be to create a list of all focusable nodes in the modal window and, upon the modal firing, allow for tabbing only through those nodes.

A third option would be to find all focusable nodes outside of the modal and set tabindex="-1" on them.
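The third option might look like the sketch below. The function names are invented for this example, and the node list is passed in so that the caller can supply something like `document.querySelectorAll('a[href], button, input, textarea, select')`.

```javascript
// Remember each disabled element and its original tabindex
// so that everything can be restored when the modal closes.
var savedTabindexes = [];

function disableOutsideFocus( modal, nodes ) {
  for ( var i = 0; i < nodes.length; i++ ) {
    if ( !modal.contains( nodes[i] ) ) {
      // getAttribute returns null if the attribute was absent
      savedTabindexes.push({ node: nodes[i], value: nodes[i].getAttribute('tabindex') });
      nodes[i].setAttribute('tabindex', '-1');
    }
  }
}

function restoreOutsideFocus() {
  savedTabindexes.forEach(function( entry ) {
    if ( entry.value === null ) {
      entry.node.removeAttribute('tabindex');
    } else {
      entry.node.setAttribute('tabindex', entry.value);
    }
  });
  savedTabindexes = [];
}
```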

The problem with these first and second options is that they render the browser’s chrome inaccessible. If you must take this route, then adding a well-marked “close” button to the modal and supporting the Escape key are critical; without them, you will effectively trap keyboard users on the website.

The third option allows for tabbing within the modal and the browser’s chrome, but it comes with the performance cost of listing all focusable elements on the page and negating their ability to be focused. The cost might not be much on a small page, but on a page with many links and form elements, it can become quite a chore. Not to mention, when the modal closes, you would need to return all elements to their previous state.

Clearly, we have a lot to consider to enable users to effectively tab within a modal.


Finally, modals should be easy to dismiss. Standard alert() modal dialogs can be closed by hitting the Escape key, so following suit with our modal would be expected — and a convenience. If your modal has multiple focusable elements, allowing the user to just hit Escape is much better than forcing them to tab through content to get to the “close” button.

function modalClose ( e ) {
  if ( !e.keyCode || e.keyCode === 27 ) {
    // code to close modal goes here
  }
}
document.addEventListener('keydown', modalClose);

Moreover, closing a full-screen modal when the overlay is clicked is conventional. The exception is if you don’t want to close the modal until the user has performed an action.

Use the following JavaScript to close the modal when the user clicks on the overlay:

mOverlay.addEventListener('click', function( e ) {
  if ( === modal.parentNode ) {
    modalClose( e );
  }
}, false);

Additional Accessibility Steps

Beyond the usability steps covered above, ARIA roles, states and properties3 will add yet more hooks for assistive technologies. For some of these, nothing more is required than adding the corresponding attribute to your markup; for others, additional JavaScript is required to control an element’s state.


Use the aria-hidden attribute. By toggling the value true and false, the element and any of its children will be either hidden or visible to screen readers. However, as with all ARIA attributes, it carries no default style and, thus, will not be hidden from sighted users. To hide it, add the following CSS:

.modal-window[aria-hidden="true"] {
  display: none;
}

Notice that the selector is pretty specific here. The reason is that we might not want all elements with aria-hidden="true" to be hidden (as with our earlier example of the “X” for the “close” button).
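A minimal sketch of keeping the attribute in sync might look like this. `setModalHidden()` is a hypothetical helper; you would call it from the modal's show and close functions so that the CSS above does the visual hiding.

```javascript
// Toggle aria-hidden between "true" and "false" with the modal's state.
function setModalHidden( modal, hidden ) {
  modal.setAttribute( 'aria-hidden', hidden ? 'true' : 'false' );
}

// In modalShow():  setModalHidden( modal, false );
// In modalClose(): setModalHidden( modal, true );
```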


Add role="dialog" to the element that contains the modal’s content. This tells assistive technologies that the content requires the user’s response or confirmation. Again, couple this with the JavaScript that shifts focus from the last active element in the document to the modal or to the first focusable element in the modal.

However, if the modal is more of an error or alert message that requires the user to input something before proceeding, then use role="alertdialog" instead. Again, set the focus on it automatically with JavaScript, and confine focus to the modal until action is taken.


Use the aria-label or aria-labelledby attribute along with role="dialog". If your modal window has a heading, you can use the aria-labelledby attribute to point to it by referencing the heading’s ID. If your modal doesn’t have a heading for some reason, then you can at least use the aria-label to provide a concise label about the element that screen readers can parse.
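Putting these attributes together, the markup might look like the following (the IDs, class name and heading text are invented for illustration):

```html
<!-- role="dialog" tells assistive technologies that this is a dialog;
     aria-labelledby points to the heading that names it. -->
<div class="modal-window" role="dialog" aria-labelledby="modal-title">
  <h2 id="modal-title">Subscribe to the newsletter</h2>
  <!-- modal content and a well-marked "close" button go here -->
</div>
```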

What About HTML5’s Dialog Element?

Chrome 37 beta and Firefox Nightly 34.0a1 support the dialog element, which provides extra semantic and accessibility information for modal windows. Once this native dialog element is established, we won’t need to apply role="dialog" to non-dialog elements. For now, even if you’re using a polyfill for the dialog element, also use role="dialog" so that screen readers know how to handle the element.

The exciting thing about this element is not only that it serves the semantic function of a dialog, but that it comes with its own methods, which will replace the JavaScript and CSS that we currently need to write.

For instance, to display or dismiss a dialog, we’d write this base of JavaScript:

var modal = document.getElementById('myModal'),
    openModal = document.getElementById('btnOpen'),
    closeModal = document.getElementById('btnClose');

// to show our modal
openModal.addEventListener( 'click', function( e ) {; // or modal.showModal();
});

// to close our modal
closeModal.addEventListener( 'click', function( e ) {
  modal.close();
});

The show() method launches the dialog, while still allowing users to interact with other elements on the page. The showModal() method launches the dialog and prevents users from interacting with anything but the modal while it’s open.

The dialog element also has the open property, set to true or false, which replaces aria-hidden. And it has its own ::backdrop pseudo-element, which enables us to style the modal when it is opened with the showModal() method.
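For example, the overlay behind a natively opened dialog could be styled like this (the color is illustrative):

```css
/* Applies only when the dialog was opened with showModal(),
   in browsers that support the native element. */
dialog::backdrop {
  background: rgba(0, 0, 0, 0.6);
}
```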

There’s more to learn about the dialog element than what’s mentioned here. It might not be ready for prime time, but once it is, this semantic element will go a long way to helping us develop usable, accessible experiences.

Where To Go From Here?

Whether you use a jQuery plugin or a homegrown solution, step back and evaluate your modal’s overall usability and accessibility. As minor as modals are to the web overall, they are common enough that if we all tried to make them friendlier and more accessible, we’d make the web a better place.

I’ve prepared a demo of a modal window4 that implements all of the accessibility features covered in this article.

(hp, il, al, ml)


The post Making Modal Windows Better For Everyone appeared first on Smashing Magazine.

Are Your Internal Systems Damaging Your Business?

Fri, 09/12/2014 - 13:59

The internal systems of many organizations have shocking user interfaces. This costs companies in productivity, training and even the customer experience.

Fortunately, we can fix this.

“How come I can download an app on my phone and instantly know how to use it, yet need training to use our content management system? Shouldn’t our system be intuitive?”

This was just one of the comments I heard in a recent stakeholder interview. People are fed up with inadequate internal systems. Many of those I interviewed had given up on the official software. Instead, they use tools like Dropbox, Google Docs and Evernote.

The problem seems to exist across the board. I am hearing the same thing from employees across many companies and sectors. I am also hearing it about almost all types of internal systems, from ones for customer relationship management (CRM) to ones for procurement. They are all painful to use.

Frustration will only increase as millennials enter the workforce. These people are digital natives, and they expect a certain standard of software. They expect software to adapt to them, not the other way around.

The result of this frustration is that employees are abandoning these systems. People use email instead of a CRM and put documents in Dropbox rather than on the intranet. This leads to systems being out of date and, thus, irrelevant to the organization.

How have things gotten to this state? Why is enterprise software so bad?

One Size Does Not Fit All

I think technology is often oversold. “A content management system is the solution to content!” “An intranet is the answer to improving efficiency!” “A CRM system will manage the customer relationship!” But that is just not true.

Unfortunately, in the eyes of senior management, once a piece of software is purchased, the problem is solved. Job done, move on to the next challenge.

One size rarely fits all. Organizations rarely work in the same way, even within the same sector. Even if a law firm purchases an intranet designed for the legal sector, the system won’t necessarily work well out of the box for that firm.

People work in different ways. The functionality required by the secretary to the CEO will be different from the functionality required by someone in accounting or HR. Yet many enterprise systems do nothing to streamline the experience for different groups. (Image source: opensourceway1)

Many of these systems could be tailored to the needs of individual organizations or employees, but they are not so out of the box. They need to be configured and optimized, which usually does not happen — or else the wrong system is purchased to begin with.

There must be a better way.

Starting With Users’ Needs

The procurement process for these systems too often begins with a list of desired features. This is the wrong starting point. We should approach internal software in the same way that we develop external applications: starting with users’ needs.

Regardless of whether you already have a system in place, identify your different user groups. Who will be using each system? Once you know that, shadow them for a while. Understand how they work. What do they do each day, and what system do they already use to get their work done?

Look for pain points in that system, and talk to them about where they get frustrated. Identify the information they need to do their job, and be aware of any clutter that gets in the way.

Finally, identify your users’ top tasks. Which tasks do your different user groups perform again and again? These need to be super-accessible.

You might think that you now have enough information to buy a system. But just because something has the functionality you need does not mean it is easy to use. Before leaping for expensive software, design the user experience. We can do that with some simple prototyping.

Prototype Your Perfect System

Creating a prototype of how your ideal system would work does not need to be time-consuming or particularly expensive. Best of all, it could replace a long-winded and abstract specification of functions.

Using nothing but a bit of HTML, CSS and JavaScript, we can build a working prototype that can be tested with real users. Does this prototype match their workflow? Does it give them quick access to key tasks? Does it accommodate the differences between groups? Which parts of the prototype are having the biggest impact on productivity, and which are just nice to have?

We can iterate on the prototype based on user feedback until it offers the optimal experience.

With that vision in place, you can compromise intelligently.

Informed Compromise

A working prototype is a good standard by which to measure different software — much better than a specification.

Could your existing system be set up to mirror the prototype? If it can’t exactly, then which areas would you have to compromise on? Based on your user testing, are these compromises acceptable?

If your existing system cannot replicate the key functionality of the prototype, look at alternatives. Talk to other vendors and show them your prototype. Ask whether their system can replicate it, and once again, decide on areas of compromise based on user feedback.

Do you see the difference here? The experience is designed around the user, not around what the software provides. Also, if you cannot find software that meets the needs of your users, consider building a bespoke system.

Buying software off the shelf makes no sense if no one will use it or if it provides no business value. (Image source: opensourceway2)

I know what you’re thinking. This makes sense, but senior management won’t go for it. They won’t pay for a prototype or a bespoke system. Well, that depends on how you sell it.

Selling The Need For A User-Centric System

Convincing management to spend money on a prototype can be hard. It’s hard enough when a clever salesperson says that their software will solve all of the company’s problems — harder still if management has already paid for a fancy system. Nevertheless, solid business arguments can be made for this approach.

If your company has a system that is not fit for its purpose, you should be able to prove this. Collect data on how users interact with the system. Combine this with user testing and stakeholder interviews. This should be enough to establish a compelling case — at least compelling enough to justify some limited prototyping of an alternative approach.

Remember that you are not asking them to replace the system. You just want to prototype a potentially better solution and see whether the current software could be set up to match it. When managers see a better way, they will usually be open to change.

If the company does not already own a system, then your position is even stronger. Enterprise software is expensive, and so ensuring the right fit is important. Getting it wrong could mean hundreds or thousands of dollars wasted. A prototype will prove more effective than a specification at measuring the suitability of different products. It will also make it easier to compare software.

Of course, management could take the position that employees will just need to get used to what they have. This argument has some merit. Given time, users would adapt to even the most archaic of systems. But at what cost?

The Cost Of Failure

Poor user interfaces require more training and support. Both are a cost borne by the organization, not to mention the frustration they cause. Even more significant is the cost in lost productivity. Organizations are keen to maximize efficiency, and systems that are easy to use go a long way towards this.

Unfortunately, some managers seem to care little about internal processes. But they do care about customer satisfaction, which is becoming one of the most popular factors for organizations to measure. We now live in a world of consumers who are connected and have a voice through social media. That makes organizations sensitive to negative comments and experiences.

Internal systems weigh heavily on the performance of your employees. And they have a massive impact on the customer experience. These systems ensure timely responses; they help deliver the product; and they facilitate customer relationships. This is why internal systems are becoming the next big competitive advantage.

Are you struggling to implement an effective digital strategy? Do you need help bringing about a digital transformation? The Digital Adaptation4 book can help you prepare and adapt your company for the new digital landscape, and teach you everything you need to know. — Ed.

(al, il)


The post Are Your Internal Systems Damaging Your Business? appeared first on Smashing Magazine.

Dropbox’s Carousel Design Deconstructed (Part 2)

Thu, 09/11/2014 - 13:34

Many of today’s hottest technology companies, both large and small, are increasingly using the concept of the minimum viable product (MVP) as a way to iteratively learn about their customers and develop their product ideas. This two-part series looks into the product design process of Dropbox’s Carousel.

Part 11 covered the core user, their needs and Dropbox’s business needs, and broke down existing photo and video apps. This second part is about Carousel’s primary requirements, the end product, its performance and key learnings since the launch.

Primary User Requirements

In a Wired article2 covering Carousel’s launch, Gentry Underwood, CEO and cofounder of Mailbox (which was acquired by Dropbox) and lead designer of Carousel, detailed some of the key requirements that his team prioritized.

Below is a list of some of them, as well as some requirements highlighted in media coverage of Carousel’s launch3 and from our evaluation of existing products and design patterns in part 1.

Back up all photos and videos

The app has to save not only the photos that users want to see in the gallery, but also ones they don’t want to see yet but might want to at a later date. Not to mention, this takes up more storage, which is ideal for Dropbox’s business. Most photo apps allow you only to delete photos, not hide them. “It’s a 100% destructive thing,” Underwood says. And the permanence of deleting photos requires a heavy two-step process of hitting the trash button and confirming the action. Underwood claims that this leads to users not deleting media and, ultimately, to sloppy media galleries with misfires, blurry selfies and many imperfect versions of the same shot.

Display all photos and videos

According to Underwood, another big problem with media gallery apps is that they seem to start from the last time you bought a smartphone. This is especially true for stock apps like Apple’s Photos. However, even with photo stream and other apps that sync a portion of your photos locally while saving the rest in the cloud, users can never see their entire media history — they have to go to their computer or the web for that.

Show the best photos and videos

The most obvious solution for this is to make it easy to manually hide undesired media, presumably with some quick swiping action. However, the app could also surface media that users would most likely want to see, like ones with faces or, more importantly, smiling faces. Beyond finding the best media, the app could also highlight one or more thumbnails of media that seem most interesting.

Enable quick navigation

Media should be automatically sorted into events based on common attributes such as time and location. The groupings should also show just a sample of the photos from that event in order to save space while navigating through a long list. Finally, users should have multiple ways to scroll through media (for example, slowly or quickly).

Feel native

Making it seem like everything is stored locally would set this application apart from the competition. After all, that snappy feeling is what makes Apple’s Photos more appealing than Facebook, Flickr, Instagram, Dropbox and the like. Among other things, fine-tuning the caching and other back-end tricks could help dramatically. But some clever perceptual tricks could also be done. For example, multiple thumbnails of each media file could be saved at various resolutions and be dynamically deployed based on how fast the user is scrolling through the gallery. Faster scrolling would trigger lower-resolution thumbnails so that they load instantly and make the app feel native. Moreover, adding, moving, changing and deleting media files from Carousel or Dropbox should happen lightning-fast.
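The resolution-switching trick described above can be sketched as a tiny function that maps scroll speed to a thumbnail tier. This is a toy illustration, not Carousel's actual implementation; the tier names and pixel-per-second thresholds are invented, and a real app would tune them against perceived smoothness.

```javascript
// Pick a thumbnail resolution tier from the current scroll speed.
function thumbnailTier( pixelsPerSecond ) {
  if ( pixelsPerSecond > 3000 ) { return 'low'; }    // flinging through years of photos
  if ( pixelsPerSecond > 800 )  { return 'medium'; } // fast scrolling
  return 'high';                                     // browsing slowly: full quality
}
```

The lower tiers load near-instantly, which is what makes scrolling a cloud-backed gallery feel native.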

Enable public and private sharing

Users should be able to share videos and photos with others easily without having to use platforms with storage limitations, such as email. Also, they should be able to easily select between public sharing (i.e. on social networks) and private sharing through email, SMS and private in-app chat. “Carousel’s sharing tools can be utilized through any email address or phone number, whether the recipient has a Dropbox account or not,” says Underwood.

Enable public and private discussion

Although in-app discussion is an option when media is shared privately, as mentioned above, it’s not necessary. However, allowing for focused discussion on a set of photos — particularly after an event, when users want to congregate and compare photos — can be valuable. As an alternative to Facebook Messenger, SMS and email, where many other conversations go on, offering a dedicated set of chat threads for users’ personal media and nothing else would be beneficial. It would also be a great way to acquire new users for Dropbox.

What Do Users Get?

Basically, users get a camera roll for Dropbox. As Federico Viticci from MacStories eloquently puts it4, the app is a clean and imaginative “alternative Camera Roll and Photo Stream based on Dropbox storage with built-in sharing for individual or group conversations.”

Carousel’s MVP is effectively two things for most users: a Dropbox uploader for backing up local photos and videos, and an enhanced version of Apple’s native Photos app, with improved viewing, sharing and discussion functionality. The app doesn’t let users take, edit or manage photos, other than hiding them (or deleting them, if they can find that feature), or view in anything other than chronological order.

For now, if users want to take and edit photos, then their mobile camera, Instagram or Camera+ are great options. To organize photos into folders, they’ll need to use Dropbox directly. And to view them in anything other than chronological order, they would sync Dropbox with a more advanced media gallery such as iPhoto, Picasa or Unbound. You will understand Carousel’s MVP much more easily by testing it out than by listening to me explain it ad nauseam. Below are four screenshots of what you can expect. To help you along, MacStories thoughtfully runs through5 what you can expect in your first experience.

Carousel mobile app (View large version)

Results And Learning

Mills Baker, a product design analyst at Mokriya, paints a rather dismal picture of Carousel in “Designer Duds: Losing Our Seat at the Table8”:

It’s honestly hard to determine what should be interesting about it, even in theory; it takes mostly standard approaches to organization, display, and sharing, and seems to do little to distinguish itself from the default iOS Photos app + iCloud Photo Sharing, let alone apps and services like Instagram, VSCO Cam, Snapchat, iMessage, Facebook Messenger, and so on.

To get an idea of why Mills feels so strongly about Carousel’s shortcomings, let’s look at the results since its launch.


Since topping the charts on and around launch day, Carousel has steadily lost attention, now ranking 456th in the “Photo and Video” category of Apple’s App Store and falling off in the overall rankings across all categories. It has basically been buried in the crowded photo and video app market, and Dropbox will need to make some non-trivial changes in subsequent iterations to make it bounce back to the top.

Carousel’s ranking has steadily declined since launch. (Image: App Annie) (View large version)

Upon launching in April 2014, the app certainly didn’t increase downloads of Dropbox’s main app, suggesting that Carousel’s main impact was on revenue or engagement, if anything.

Dropbox’s ranking hasn’t been affected by Carousel. (Image: App Annie) (View large version)

Downloads

As of 16 July 2014, Carousel appears to have been downloaded 174,000 times globally. If Dropbox currently has 300 million users, then it has managed to get a paltry 0.06% of its total customer base to adopt Carousel. Clearly, it needs to make some improvements to increase adoption.

Carousel downloads are sufficient for testing the app, not for claiming mass adoption. (Image: XYO) (View large version)

Ratings

If we look at reviews in Apple’s App Store, non-target users almost unanimously consider Carousel to be a failure. “Not what I wanted,” “MASSIVE oversight” and “Completely useless” smatter the reviews section — all valid complaints if people are using it professionally. Meanwhile, average consumers like Owen and Nora have mixed reviews, ranging from “Amazing app!!!! This app is the best way to back up and privately share my photos on iOS!!!” to “Bring back Loom! Complete downgrade from Loom… Sad.”

While it wasn’t a runaway success upon launch, Carousel drew user reviews in the US and internationally that at least skew favorably. In fact, the reviews are as good as the ones for Dropbox and even Mailbox, both excellent standards for any productivity app.

Reviews for the first version of Carousel globally (left) and in the US (right). (Image: App Annie) (View large version)

While these reviews make it difficult to dispute Baker about the lackluster adoption of Carousel upon launch, 174,000 downloads is more than enough to learn about how people use Carousel, what needs to be improved and how well various features help Dropbox achieve its business goals.

In-App Purchases

While these statistics are highly coveted and, thus, kept very private, you can at least generalize that Carousel is achieving its primary objective of upselling existing users to premium accounts. However, many details suggest that Dropbox will need to make some major improvements to scale downloads and usage for new and existing users.

With Dropbox charging roughly ten times the price of Google’s popular cloud storage alternative, Drive, it will be interesting to see how much price has stunted adoption of and engagement with Carousel. If we’re being realistic, anyone who wasn’t born yesterday would know that Carousel requires Dropbox’s Pro Plan23 to work reliably. Dropbox will certainly have to address this as companies continue to compete on price in building up their cloud-based apps on top of virtually free cloud storage.

Top in-app purchases for Carousel (left) and Dropbox (right). (Image: App Annie) (View large version)

Looking Forward To The Next Iteration

As expected, this MVP is far from being a full photo and video manager. It lacks some features that hold users back from adopting it, including:

  • better metadata and a better organizational structure for navigating photos,
  • more granular syncing options to reduce clutter,
  • a web viewer for making sharing easier,
  • lower pricing.

However, if you look closely at everything Dropbox has done with Carousel, it has been extremely disciplined in prioritizing many of the most important features for its business and its users. It has drawn from the best relevant design patterns that I could find, many of which are not to be found in the closest alternatives, including Apple’s Photos and Instagram Direct. And while most mobile photos galleries aren’t that complex, Dropbox has managed to edit out features that are less important, such as a camera, editing features, heavy organization options, chat outside of sharing, and friend lists.

It still has a ton of work to do on the web and mobile. Considering how much people wish Loom was still around, many of its features will probably be included. Additionally, well-designed and robust apps such as Picturelife offer a great deal of inspiration for a dramatically simplified alternative to Carousel.

And while Dropbox might have done better to wait for what Rand Fishkin, cofounder of Moz, calls an EVP27 — an exceptional viable product — Carousel has a promising future. Dropbox just needs to tweak what it’s got to get more people to download and use the app.

(al, ml)


The post Dropbox’s Carousel Design Deconstructed (Part 2) appeared first on Smashing Magazine.

Creating Clickthrough Prototypes With Blueprint

Wed, 09/10/2014 - 13:10

In a previous article1, I discussed using POP2 to create sketch-based clickthrough prototypes in participatory design exercises. These prototypes capture well the flow and overall layout of early design alternatives.

The same piece briefly mentioned another category of clickthrough prototypes: widget-based mockups that are designed on the target device and that expand on sketches by introducing user interface (UI) details and increased visual fidelity. These prototypes can be used to pitch ideas to clients, document interactions and even test usability. In this article, I will teach you how to use the iPad app Blueprint to put together such prototypes in the form of concept demos, which help to manage a client’s expectations when you are aligning your visions of a product.

Given the software’s rich feature set, I will cover the most useful functionality that helps designers get up and running with the tool. To make the exercise more realistic, I will reuse the earlier POP scenario. A link to the Blueprint work file is shared in the “Resources” section at the bottom. An exciting exploration lies ahead of us, so let’s jump right in!

The completed Blueprint project shows the complexity of widget-based clickthrough prototypes. (View large version4)

Where Do Concept Prototypes Fit In The Design Process? Clients as Knowledge Ambassadors

Stakeholders and user experience designers (UXers) come to participatory design meetings with experience from other projects, research knowledge and background information. Many clients bring a fresh perspective by offering specific ideas, but some need extra help from designers to translate business requirements into usable experiences. To do so, designers need to learn about the client’s domain. This is where stakeholders’ expertise becomes an invaluable resource for clarifying problematic aspects of the product.

Stakeholders and designers come together to make the end product even better.

Stakeholders will also gain insight into the design process and will feel that their voices are being heard. Regardless of their level of enthusiasm or participation, however, many will be ill-equipped to understand the activity’s outcomes: How many times have you been met with blank stares when explaining flow diagrams and UI layouts?

Don’t Just Explain: Tell a Story

To more clearly communicate the design direction of a project, you must revisit the communication from the perspective of a stakeholder: UXers must show how a concept will solve users’ problems and how it will do so by distinguishing itself from the competition through unique selling points. Storytelling, a very inclusive activity, welcomes the audience to the experience intended for end users.

Stories help an audience enter new worlds.

One approach is to use widget-based clickthrough prototypes to show off various user scenarios in a story: The interactivity of the design concepts paint a picture of the envisioned product. You can reinforce the narrative with highly polished prototypes, but their visual fidelity should be driven by the motivation of your audience and the goals of the project. Just as storytellers did around campfires, use visual aids: Present the prototypes on the target device for greater impact. The demo might be interrupted by feedback from stakeholders, but remember that you are there to present a vision, not a complete product, so unsolicited input won’t affect the throwaway concept.

Building Concept Prototypes Underlying Consideration

Personal experience has shown me that prototyping several key concept scenarios, which takes 45 minutes to 1 hour to walk through, leaves the most lasting memory. If your delivery is longer, the audience will get uneasy. Show off a single flow with high interactivity or several more linear flows, but be cognizant of what you want to accomplish with the demo: Sometimes it is to sell the work, and other times to guide the team to the next design step.

Presenting an abundance of detail in a short period of time will block your participants’ understanding. Be brief!

Because concept prototypes communicate design vision, they must be of high fidelity. The fidelity, however, could become distracting: Stakeholders might grab onto less important details, such as a button’s color, rather than pay attention to the overall design. Avoid this situation by clearly setting their expectations for the demo up front: This is not a design critique. Rather, you are there to present a vision of the product. This will help to get agreement on the direction before you move to a large-scale evolutionary prototype as the final deliverable.

Designing Directly on the Device

Building concept designs directly on the target device has many benefits:

  • simulates UX of target device (for example, the platform’s input methods and the OS’ metaphors);
  • prototype is usable in different environments;
  • facilitates audience involvement;
  • easy to reproduce demo on external computers (wireless mirroring, wired video duplication, etc.);
  • easier access to visual assets (Dropbox, device’s photo gallery, etc.).

But it also has some inherent limitations:

  • Creating large widget-based clickthrough prototypes is not cost-effective.
  • Screen covers and enclosures can affect screen sensitivity.
  • A stylus is needed to avoid the fat-finger problem.
  • Neck and back fatigue can set in (leverage a stand to help).
  • Light reflection could also be a problem.

How Blueprint Fits In The Picture

Blueprint, a $20 app by groosoft, enables you to create iPhone and iPad clickthrough prototypes on an iPad. The tool’s quality is best seen when building UIs with its ready-made widgets and event model. The prototypes can be demoed in the application or via a free companion tool, Blueprint Viewer. There is also Blueprint Lite, but it limits you to two projects and does not support external projects. Blueprint requires no user account because prototypes are distributed as a .blueprint file, a PDF specification or a series of PNG images. See the “Exploring the Export Options” section below for more.

Create the Project’s Container

You are assigned to create a mobile news website! After meeting the stakeholders, you put some initial ideas for pages on paper, including the portal, individual articles and other views. Having heard about Blueprint, this new prototyping tool, you want to give it a shot! I will guide you by covering its main functionality, which we will use to build an interactive prototype. For detailed coverage of its features, feel free to browse groosoft’s “Tutorials” section.

As with other design tools, you must store your project somewhere. In Blueprint, you do this by first creating a project container for all of your screens. In the “Home Screen,” tap the “+” (third icon in the bottom toolbar). This will bring up a menu with two options (not shown here): create a new project or duplicate an existing one. Duplicating is handy if you need to create different versions of the same project or to leverage assets in an existing prototype.

Options for importing, exporting, converting and deleting projects also appear on this screen. (View large version)

Your stakeholders have data from Google Analytics showing an iPhone user base of over 30%! Upon selecting “New Project,” you have to choose between an iPhone app, a mobile website or an iPad app in portrait or landscape orientation. Given the large iPhone population, let’s select the entry for mobile website!

The top-right tab toggles between prototyping for iOS 6 and 7. As a good practice, pick the latest version of iOS because the majority of users upgrade quickly. For our example, let’s pick iOS 6, which will enable you to play with the iOS Converter later (a $10 add-on that updates the look and feel of iOS 6 projects to match iOS 7).

iOS 7’s look and feel are applied by default, but you can switch to iOS 6. (View large version)

Once you select the target device, you will see the project’s first screen. Before we discuss it, let’s backtrack to the “Home Screen” for a second. Because you’re a designer, the quality of your deliverable speaks to your professionalism; therefore, you will need to provide details about the project. Tapping the “i” icon in the bottom-right corner reveals fields to capture the project’s title, a description, the author’s name and other information. It’s easy to forget to enter this data later on, so do this before you start to prototype. Tap the white checkmark to save.

Here you can also adjust the phone’s aspect ratio by selecting “iPhone 4” or “iPhone 4 / 5.” (View large version)

Import an Existing Project

As a first-time user, you have no reusable assets. However, a colleague who is experienced with Blueprint has offered to share their .blueprint files with you. To view these files, you will need to import them. Go to the “Home Screen” and tap on the second icon: You can import .blueprint files from iTunes or Dropbox.

You must grant Blueprint access to your Dropbox account for this method. More on this is shared in the “Exploring the Export Options” section near the end of this article. Once you’ve granted permission, you will find the folder containing your .blueprint files and, upon tapping a .blueprint file, it will be added to your list of projects. For more information on using Blueprint with iTunes, consult groosoft’s FAQ page. As long as your iOS mobile device is connected to the computer, you can drag a previously saved .blueprint file from the computer to Blueprint or to Blueprint Viewer in iTunes’ “Apps” section.

Blueprint currently offers only two importing options. (View large version)

Design Modes

You’re doing great! Let’s cover the design modes, which will help you to understand how Blueprint works. The app has two modes: the site map view and the screen view. Both share functionality, including ways to center the workspace or view it in full screen, to review the screen list and to take actions on items. You can toggle the views using the L13 icon in the “Local” toolbar (see the next section).

For most new projects, you will start in the screen view, with one working screen. At the screen level, you will be able to add widgets and cross-screen interactions. In contrast, the site map view enables you to organize screens spatially and to visualize the links (or paths) between screens in user flows.


Jumping to the screen design is easy, but a quick description of the controls will situate you in the application. Both views share the same toolbars, which have functionality specific to screens or their widgets. There are two of them: Global (named here “G”), which offers app-wide functionality, and Local (named “L”), which controls aspects of the current project. Throughout the prototype exercise, I will refer to entries from both toolbars. Notable entries in Global are “My Projects,” which returns you to “Home Screen” (G1), the self-explanatory “Add New Screen” (G3), and “Tools,” which contains help information and controls for on-screen guides (G5).

The controls are organized in two logical groupings, on either side of the project’s title. (View large version)

The second toolbar lists tools associated with editing (some are disabled in site map view). A few less commonly used but practical features are the “Pen Tool” for drawing custom shapes in widgets (L1), the “Clock” to show the history of recently edited screens (L10), and the “Device Toggle” to change between iPhone 4S and 5 shells (L12).

The second tier of controls is organized in eight groups. (View large version)

There is also the side panel, which is contextual. In the site map view, it shows screen links grouped by color (see the “Adding Interactivity” section below). In the screen view, the panel updates with two tabs, “Widgets” and “Accessories.” The first displays configurable iOS components in the “Controls,” “Tables,” “Bars” and “Views” areas. The second shows annotation components, which are handy for capturing problems and questions.

Additionally, the side panel changes its contents according to whether the user is focused on a screen (in site map view) or a widget (in screen view). Two new tabs are revealed, “Properties” and “Actions.” The first includes configurable options for the selected screen or widget, while the second allows you to modify interactions. More on this in later sections!

Variations of the same side panel provide control of screens and individual widgets. (View large version)

Using Images

Now that we have the project’s container, let’s create the individual screens and, lastly, layer on interactivity. Why do I say “create”? Are there no sketches from the earlier POP exercise? After meeting the stakeholders, you learn that you have additional time and so decide to improve the prototype by increasing its interactivity and visual fidelity (and not simply duplicate the POP sketches).

However, you can reuse the previously created images in Blueprint with a little trick. Paste the images into a background “View,” and layer transparent “Buttons” on top to link to other screens. This might be confusing now, but it will make sense when we build the first screen. Our prototype will have grayscale fidelity and (similar to POP) single-tap interactions. You can create colorful comp-like designs, and you are welcome to explore this as an exercise using the .blueprint file shared at the end of this article!

When using transparent “Buttons,” remember their placement because they will become hard to find. (View large version)

Setting Up Your First Screen

Web experiences in Blueprint come with an initial empty screen, with the generic name “Screen.” For the experience you are designing, this will become the visitor portal. To navigate screens more easily, you’ll need to update their names to be human-readable. While in the site map view, under “Properties,” tap on “Title” and enter “1.0 Home.” To match the experience you are targeting, set how the status bar should be displayed by selecting “Black,” “White” or “None.” The “Start Screen” option is disabled because we have only one screen, but with multiple screens you can redefine the starting point.

Drag the workspace in the site map view to see all of its contents as you add more screens. (View large version)

Populating the Screen With Widgets

You now know enough about Blueprint to start prototyping the concept! Let’s add some widgets to the screen. Double-tap the screen to enter the screen view. Add a background (the first entry in the “Views” collection) as the container for what’s to follow. This will give you control over the background color for the experience! Double-tap the widget in the panel or drag and drop it, thereby adding it to the screen.

A blue border highlights the container into which the widget is being dropped. (View large version)

Don’t be confused by the updated side menu, which now shows the “Properties” and “Actions” tabs. What do I mean? We just dropped a “View” onto the “Screen.” You will notice that the side panel lists the options for “View,” then the options for “Screen.” This hierarchy helps when you’re deep-nesting widgets (for example, when building toolbars or list items).

Once you’ve dropped the widget onto the screen, you’ll see blue-dot handles to resize the widget. Tapping the widget again shows a flyout menu, with the options “Cut,” “Copy,” “Delete,” “Duplicate” and “Lock.” Your client is demanding, so these little features (especially “Select Subwidgets”) will help you to tackle changes. The folks behind Blueprint offer a shortcut to this via the gear icon, L9. For increased productivity, you can toggle between the “Properties” and “Actions” views by retapping a widget or the “i” icon, G4.

If you’re viewing the start screen of the prototype, its title will be in white. (View large version)

I’ll let you in on a little secret! Backgrounds come with a hidden extra: When you resize a background, all child widgets get resized accordingly, thus saving you even more time and keeping your client happy! Earlier, I hinted at reusing sketch captures if you are pressed for time. To do this, just pick “Images” under “Properties” in the side menu. “Camera Roll” is one of the options, giving you access to all images on the device.

Under “Image” in the new window, you will find several preloaded icon libraries. (View large version)
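The proportional-resize behavior described above is worth a closer look. The sketch below is a hypothetical model of a container that rescales its children when resized; the class and method names are mine, not Blueprint’s actual internals:

```python
# Hypothetical model of a container widget that rescales its children
# proportionally, mimicking the background behavior described above.
# Widget and resize() are illustrative names, not Blueprint's API.

class Widget:
    def __init__(self, x, y, w, h):
        self.x, self.y, self.w, self.h = x, y, w, h
        self.children = []

    def add(self, child):
        self.children.append(child)
        return child

    def resize(self, new_w, new_h):
        """Scale this widget and all nested children by the same factors."""
        sx, sy = new_w / self.w, new_h / self.h
        self.w, self.h = new_w, new_h
        for c in self.children:
            c.x, c.y = c.x * sx, c.y * sy   # reposition relative to parent
            c.resize(c.w * sx, c.h * sy)    # recurse into nested widgets

background = Widget(0, 0, 320, 480)
hero = background.add(Widget(10, 10, 300, 200))
background.resize(640, 960)  # double both dimensions
print(hero.x, hero.y, hero.w, hero.h)  # → 20.0 20.0 600.0 400.0
```

Resizing the background once and letting the scale cascade is exactly why the hidden extra saves time: no child widget needs to be adjusted by hand.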

If you choose to show only the sketch on the screen, there is a way to hide the status bar. Turning it off is as easy as one, two, three: Go to “Properties” → “Screen” → “Status Bar” and voilà! The sketch approach is limited because you are still locked to iOS’ resolution and aspect ratio. Use it wisely.

Blueprint has new popup screens for menu options. (View large version)

You’ve just added your first widget, and now you want to create the remaining assets. Hold your horses, partner! Let’s first talk about a big problem. When you add closely positioned widgets, accurate placement becomes harder. Many a time I have tried to resize a widget only to end up moving it! Therefore, use a stylus to avoid the fat-finger problem. For extra precision, try the “Position” and “Size” adjustments under “Frame.”

“Position” and “Size” options are split into separate tabs. (View large version)

Pinching in and out of “1.0 Home” will zoom in and out of details, a helpful technique to align widgets. Maintaining your workspace is a must in Blueprint: You can fit the screen within the workspace area again via L4, and you can reveal more vertical space by entering full screen via L5 (then double-tapping the screen to exit).

View the screen at maximum zoom by pinching out. (View large version)

Awesome! You are fully armed to tackle the featured article’s hero space, the navigation toolbar and the month’s list of articles. First, change the color of the background to dark gray. Next — because we’ll want to show the client that rich imagery and meaningful titles will capture the visitor’s attention — drag an image widget, and resize it to fit the background widget, with gutter space of roughly 10 pixels. You can round the widget’s corners, or make it transparent if you are stacking layers of information. I will let you finish by adding the article’s title and the dot navigation. Refer to the .blueprint file for the completed widget!

The red line is a guideline, which was turned on via L4. (View large version)

We anticipate a growing information architecture, so let’s select the side menu navigation pattern. To build the header toolbar that contains the hamburger toggle, drag another “View” and resize it to approximately 40 pixels in height. Then, drop a label widget on top of it for the title “News Site.” Next, add two text-free buttons with custom images. See whether you can figure out this part on your own. Here is a hint: Use the built-in icon libraries, and customize the images’ color to white!

After you’ve built the hero space and toolbar, you will want to move both of them at the same time via multi-selection. To add widgets to the selection, tap and hold the first widget and then, with another finger, tap each additional widget. I recommend doing this with the tablet resting on a flat surface because you will need both hands. The tool has no alignment or distribution options — maybe in the next release!

You can change the alignment, style, font and many other options for each label. (View large version)
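Until Blueprint gains alignment and distribution, it helps to remember that both are simple computations over widget frames, which you can do mentally or with the “Position” fields. A sketch of what such tools would compute, with frames as (x, y, w, h) tuples and function names of my own invention:

```python
# Illustrative align/distribute helpers of the kind Blueprint lacks.
# Frames are (x, y, w, h) tuples; function names are mine.

def align_left(frames):
    """Move every frame to the leftmost x, keeping sizes unchanged."""
    min_x = min(f[0] for f in frames)
    return [(min_x, y, w, h) for (_, y, w, h) in frames]

def distribute_vertically(frames, gap):
    """Stack frames top to bottom with a fixed gap between them."""
    out, y = [], min(f[1] for f in frames)
    for (x, _, w, h) in sorted(frames, key=lambda f: f[1]):
        out.append((x, y, w, h))
        y += h + gap
    return out

frames = [(30, 10, 100, 40), (10, 200, 100, 40), (20, 90, 100, 40)]
print(align_left(frames))
# → [(10, 10, 100, 40), (10, 200, 100, 40), (10, 90, 100, 40)]
```

For a handful of widgets, typing the computed coordinates into “Frame” → “Position” gets you pixel-perfect rows and columns.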

The geek in me is tempted to cover the remaining steps here, but I will let you continue working on the project on your own. No fear! I am still here to guide you with best practices as you wrap up. For example, remember to duplicate reusable components, and expect to spend time building custom widgets. Blueprint offers many options, and you will keep finding useful functionality as you become comfortable with the tool (cool features are buried in submenus). Once you’ve dragged all widgets to the screen, you will end up on the final “1.0 Home” screen. Congratulations! You’ve just finished your first screen!

One screen down, a few more to go! (View large version)

Maintaining Design Flexibility

You are ahead of schedule and want to finish the remaining screens, including the “Articles,” “Individual Article” and “Authors” views, but you notice that identifying a particular screen widget is hard. Blueprint nests widgets, and getting lost is easy if the widgets are of the same type. Before introducing any interaction and wowing your client, we should discuss features of the app that will make your design more flexible.

For example, the “Hierarchy” tool (L8) shows the nesting of a widget by using a top-down approach, with parent widgets shown on top. At each level, widgets are selectable and, when selected, are highlighted on the screen. You can also vary the depth of a widget via “Bring Forward” (L6) and “Send Backward” (L7). Lastly, it is worth reiterating L9, the gear icon and its “Select Subwidgets” option. This aids with copying multiple widgets without having to copy the parent. Phew, that was tiring!

Iterating Screens

Time’s a-wasting, and we don’t want to keep the client waiting. Let’s build the remaining screens already! Tap the top “+” icon in the first toolbar (G3) to create a new screen or to duplicate the existing “1.0 Home” in either orientation. Unfortunately, Blueprint currently offers no master or template functionality. The duplication option will save you time because you won’t have to recreate the toolbar and background in other screens.

Duplicating an existing screen in landscape orientation rescales the screen’s widgets, as seen in the preview. (View large version)

Designing clickthrough prototypes in Blueprint is iterative. You will complete one screen, then move on to the next, constantly switching between screen and site map view using the L13 icon. Upon finishing all screens, you will end up with a grid of thumbnails (as seen below) and with a sense of accomplishment for being closer to delivering a great presentation to the client. The screen’s layout may be modified to reflect the project’s flow(s). Making the design even more realistic is effortless: Toggle the device shell via the L2 icon for instantaneous results (in screen view only).

A grid layout helps with visualizing cross-screen links. (View large version)

Adding Interactivity

The concept demo has many different paths and flows, so let’s add some interactivity. Key tasks include accessing an article from the home portal, accessing author information from anywhere on the website, and providing global access to the side menu navigation. With little time left, you decide to enable single-tapping with no transitions.

Double-tap “1.0 Home” to view all of its widgets. We want the user to be able to view an article’s details by tapping the list item. So, select the list item for the top article (a “View” widget). This will update the side menu to the “Properties” and “Actions” view. Select “Actions” and, under “Tap Action” (i.e. single-tap), tap on the target.

The actions available will vary from widget to widget. (View large version)

This will bring up a screen with a site map view of all screens and existing clickthrough links. The triggering list item is highlighted in yellow in “1.0 Home,” identifying where the tap comes from. Here, you can assign a target for the interaction by tapping on a different screen, by tapping “New Screen” in the top left of the title toolbar or, if you’ve changed your mind, by tapping the “No Target” option. Go ahead and tap on “2.0 Article”!

The accordion menu on the right allows you to select a screen from the side panel. The top-right hamburger icon collapses it. (View large version)

Your first interaction is almost complete. After you’ve selected a target, you will notice that “Link Style” and “Transition” are shown. The “Link Style” lets you choose the color and format of the link line (remember the site map’s side panel from earlier?). This helps with labeling inbound and outbound scenario paths.

Different colors and styles are available. (View large version)

“Transition” allows you to select an effect for switching between screens. “Dissolve” is the default choice, and “None,” “Move,” “Reveal,” “Push,” “Curl” and “Flip” are also available. If you re-enter the “Target” screen, you will see both the UI trigger and the link arrow to the target screen highlighted in yellow. Select “None” for the transition. You can test other transitions later!

The list options are scrollable, revealing additional transitions. (View large version)
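Stepping back, the clickthrough model you have just wired up (widgets whose tap actions point at target screens) is simply a graph: screens are nodes and tap links are edges. A minimal sketch of that model, with hypothetical names rather than Blueprint’s actual file format:

```python
# Minimal model of a widget-based clickthrough prototype: screens are
# nodes, tap actions are edges pointing at target screens. The names
# here are hypothetical, not Blueprint's .blueprint format.

class Prototype:
    def __init__(self, start):
        self.current = start
        self.taps = {}  # (screen, widget) -> target screen

    def link(self, screen, widget, target):
        """Assign a tap target, like Blueprint's 'Tap Action'."""
        self.taps[(screen, widget)] = target

    def tap(self, widget):
        """Navigate if the tapped widget on the current screen has a target."""
        target = self.taps.get((self.current, widget))
        if target is not None:
            self.current = target
        return self.current

proto = Prototype("1.0 Home")
proto.link("1.0 Home", "top article", "2.0 Article")
proto.link("2.0 Article", "back", "1.0 Home")
print(proto.tap("top article"))  # → 2.0 Article
print(proto.tap("back"))         # → 1.0 Home
```

Taps on widgets without a target fall through silently, which matches the demo behavior: only linked hotspots navigate.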

Manipulating Links

The visibility of links can be modified in both the screen and site map views. Earlier, I covered how the side panel in the site map view can be used to hide and show links grouped by color. In contrast, all links may be hidden in the screen view via L3. Surprisingly, links pointing to their parent screen are not shown at all. I hinted at using custom hotspots in the “Using Images” section: To do this, drop a “Button” and make it transparent via “Properties” → “Background.” Hide its border by setting the widget to “Custom” in “Properties” → “Type” to complete the look.
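A transparent button is effectively an invisible rectangle layered over the image, and a tap is resolved by hit-testing the point against those rectangles, topmost first. A sketch of that idea, with illustrative names and coordinates:

```python
# Hit-testing a tap against invisible hotspot rectangles layered over a
# background image — the idea behind the transparent-button trick.
# Targets and coordinates below are illustrative only.

def hit_test(hotspots, x, y):
    """Return the target of the topmost hotspot containing (x, y), or None."""
    for target, (hx, hy, w, h) in reversed(hotspots):  # last added = topmost
        if hx <= x < hx + w and hy <= y < hy + h:
            return target
    return None

hotspots = [
    ("2.0 Article", (10, 60, 300, 200)),   # hero image links to the article
    ("3.0 Authors", (10, 300, 300, 40)),   # author strip below it
]
print(hit_test(hotspots, 50, 100))  # → 2.0 Article
print(hit_test(hotspots, 5, 5))     # → None
```

This is also why remembering where you placed transparent buttons matters: a tap outside every rectangle does nothing, with no visual hint as to why.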

Playing the Prototype

Layering interactions is fun because it brings the prototype to life. After you’ve added the clickthrough interactions to the remaining screens, you will be ready to test the concept demo! Press the play icon (G6) to start the demo in full screen with the device shell shown. As seen below, a two-finger tap either exits the demo or backtracks in the flow. The app does not highlight active screen links, so know your prototype scenarios well.

Once the clickthrough demo is perfected, you can rock the audience! Presenting the concept directly on the device is best. Sometimes your audience will be larger — in which case, to keep everyone engaged, you’ll want to mirror the tablet’s output either via software (such as Reflector) or hardware (such as a Lightning-to-VGA adapter).

The notification disappears shortly after the demo begins. (View large version)

By default, “Undo Last Action” is unavailable. It activates as soon as you navigate to a new screen. (View large version)

The presentation went off without a hitch! The demo clearly impressed the stakeholders because you received positive feedback and many relevant questions. Your client even asked to share the work with partners overseas. In this situation, you can prescribe the free Blueprint Viewer app and teach people how to load the sharable .blueprint source file. Let’s cover how to do that next!

The iPhone view is more compact. (View large version)

Viewing the source file on an iPad allows you to browse projects while viewing the contents of the current project. (View large version)

Exploring the Export Options

You are ready to share the work with the geographically distributed team. Luckily, unlike POP’s cloud-based distribution, Blueprint allows users to freely share assets via email, Dropbox and iTunes. To export a project, tap the leftmost icon in the “Home Screen” to access the options.

iTunes and Dropbox options are grouped separately. (View large version)

The overseas team consists of several departments: marketing, development and product management. Your stakeholders ask to send the interactive demo to the product manager and developers, whereas the marketing representatives have requested large images. You can send marketing a PDF of all screens or a ZIP archive of individual PNG images. As for the developers, that’s easy! Just send them the .blueprint file.

If “Export to Dropbox” is selected, a “Sign Out” button will be added to the top right. (View large version)

The PDF and PNG options are handy for capturing any documentation included in the file. However, this is rarely done for concept prototypes because of their fluid nature. Both have settings for adjusting the outputted deliverable.

You can adjust the paper size, number of screens per page and other options. (View large version)

The only option available for PNG is selecting which screens to export. (View large version)

Having learned about the exporting options, you are ready to send your work to the team. “Send via Mail” is the logical choice because the team is not familiar with Dropbox. Tap this option to create two messages, with the relevant deliverables attached in each draft. When you send the .blueprint file to the developers, a link to the free Blueprint Viewer is embedded in the message’s body. All you have to do is tell them to download the app!

The “Subject” field is populated with the project’s name. (View large version)

In the future, you might be working with avid Dropbox users, so let’s cover that option as well! Blueprint will redirect you to the Dropbox application if you have it installed on your iPad. After you log in, Blueprint will ask for permission to access all folders and files. You must grant this in order to export files to Dropbox. Afterwards, you can select where in your Dropbox account to store the .blueprint file, the ZIP file of PNGs or the PDF.

Tapping “Allow” returns you to Blueprint. (View large version)

Exporting to iTunes is best for internal testing and for sharing with colleagues. Personally, I have also used this feature to back up .blueprint files on an external drive. For directions on how to use iTunes with Blueprint, read “How do I send a mockup to other people?” on groosoft’s FAQ page.

Related Solutions

You now have the basics down on how to put together a quick concept prototype with Blueprint. The tool can be used for larger, more complex prototypes, but these take much longer to create, and maintenance time must be factored in.

Other applications are on the market to help with creating iOS experiences directly on a tablet. Below, I’ll briefly discuss a couple of alternatives that I’ve come across! I am continuing my search for equivalent tools on other platforms, including Android.


AppCooker

AppCooker is a $15 tool developed by Hot Apps Factory. It lives in a similar ecosystem: The application is used to create experiences on an iPad that are viewable in AppCooker or the free AppTaster. The viewer is available for the iPhone and iPad, but tablet experiences are viewable on the iPad only.

AppCooker stands out with the following:

  • It includes an exporting option for Box, direct posting of projects to the viewer, and exporting to JPEG and PDF formats.
  • Each project includes features such as “Notepad” (to capture ideas), “Definition Statement” (to outline a project’s value proposition), “Revenue & Expenses Tracker” (to develop a distribution plan) and more.
  • The prototyping tool has support for taking multi-finger actions, customizing transitions, grouping widgets and specifying multiple link paths for a single hotspot.

Its advanced features make AppCooker a powerful tool for creating full app prototypes that illustrate complex interactions and that prove an app’s technical feasibility.

Interface HD

Interface HD, also known as Interface 3, is a $10 tool developed by LessCode. Interface HD makes clickthrough prototypes for the iPad and iPhone. The app shares many of Blueprint’s features, but minor limitations exist. For example, the event model includes widget swipes and taps but no screen-level interactions (such as rotation); you have five transitions to choose from; and the software offers no way to visualize links between screens. The app is constantly being updated, so these features might be introduced soon!

Interface HD has many unique selling points:

  • It includes a “Stock Asset Manager” for downloading third-party icons and background patterns.
  • Password protection is built in, and web demonstrations require pin authentication.
  • The OS’ chrome can be customized in detail, including dynamic control of all aspects of the status bar (color, placement, icons, etc.).
  • Video tutorials are built in.

This tool is best suited for illustrating flows with light interaction and for designing UIs with customizable widgets. If you need to mock up a quick clickthrough prototype of high visual fidelity, then this is the tool for you!

Takeaways

Pros of Blueprint
  • The collection of transitions and actions is rich.
  • Customizable widgets are included.
  • Blueprint Viewer allows anyone to view prototypes for free.
  • Viewing prototypes requires no Internet connectivity.

Cons of Blueprint
  • The learning curve is noticeable at first.
  • iOS is the only platform supported out of the box.
  • Multi-finger actions and third-party widgets are not supported (at least, not yet).
  • Dynamic states and master templates are not provided.

Closing Thoughts

Widget-based clickthrough prototypes are great for communicating design concepts that emerge from sketching exercises. These prototypes bridge the gap between what the stakeholders envision and what the UXers plan to create. Blueprint, one tool that handles the creation of such prototypes, provides an effective way to capture UI details, while allowing for higher visual fidelity. Furthermore, it introduces a way to design directly on the device, allowing stakeholders to be more intimately involved in demos. If you have $20 lying around, then spend a weekend exploring this tool. It could bring many benefits to your design process. Prototype away, fellow designers!

Useful Resources

Here are the iPad prototyping tools we’ve discussed:

  • Blueprint (and the free Blueprint Viewer)
  • AppCooker (and the free AppTaster)
  • Interface HD

(al, ml)


The post Creating Clickthrough Prototypes With Blueprint appeared first on Smashing Magazine.

Improving Smashing Magazine’s Performance: A Case Study

Mon, 09/08/2014 - 12:13

Today Smashing Magazine turns eight years old. Eight years is a long time on the web, yet for us it really doesn’t feel like a long journey at all. Things have changed, evolved and moved on, and we gratefully take on new challenges one at a time. To mark this special little day, we’d love to share a few things that we’ve learned over the last year about the performance challenges of this very website and about the work we’ve done recently. If you want to craft a fast responsive website, you might find a few interesting nuggets worth considering. – Ed.

Improvement is a matter of steady, ongoing iteration. When we redesigned Smashing Magazine back in 2012, our main goal was to establish trustworthy branding that would reflect the ambitious editorial direction of the magazine. We did that primarily by focusing on crafting a delightful reading experience. Over the years, our focus hasn’t changed a bit; however, that very asset that helped to establish our branding turned into a major performance bottleneck.

Good Old-Fashioned Website Decay

Looking back at the early days of our redesign, some of our decisions seem to be quick’n’dirty fixes rather than sound long-term solutions. Our advertising constraints pushed us to compromises. Legacy browsers drove us to dependencies on (relatively) heavy JavaScript libraries. Our technical infrastructure led us to heavily customized WordPress plugins and complex PHP logic. With every new feature added, our technical debt grew, and our style sheets, markup and JavaScript weren’t getting any leaner.

Sound familiar? Admittedly, responsive web design as a technique often gets a pretty bad rap for bloating websites and making them difficult to maintain. (Not that non-responsive websites are any different, but that’s another story.) In practice, all assets on a responsive website will show up pretty much everywhere: be it a slow smartphone, a quirky tablet or a fancy laptop with a Retina screen. And because media queries merely provide the ability to respond to screen dimensions, rather than having a more local, self-contained scope, adding a new feature and adjusting the reading experience potentially means going through each and every media query to prevent inconsistencies and fix layout issues.

“Mobile First” Means “Always Mobile First”

When it comes to setting priorities for the content and functionality on a website, “mobile first” is one of those difficult yet incredibly powerful constraints that help you focus on what really matters, and identify critical components of your website. We discovered that designing mobile first is one thing; building mobile first is an entirely different story. In our case, both the design and development phases were heavily mobile first, which helped us to focus tightly on the content and its presentation. But while the design process was quite straightforward, implementation proved to be quite difficult.

Because the entire website was built mobile first, we quickly realized that adding or changing components on the page would entail going through the mobile-first approach for every single (minor and major) design decision. We’d design a new component in a mobile view first, and then design an “extended” view for the situations when more space is available. Often that meant adjusting media queries with every single change, and more often it meant adding new stuff to style sheets and to the markup to address new issues that came up.

Shortly after the new SmashingMag redesign went live, we ran into performance issues. An article by Tim Kadlec from 2012 shows just that.

We found ourselves trapped: development and maintenance were taking a lot of time, the code base was full of minor and major fixes, and the infrastructure was becoming too slow. We ended up with a code base that had become bloated before the redesign was even released — very bloated, in fact.

Performance Issues

In mid-2013, our home page weighed 1.4 MB and produced 90 HTTP requests. It just wasn’t performing well. We wanted to create a remarkable reading experience on the website while avoiding the flash of unstyled text (FOUT), so web fonts were loaded in the header and, hence, were blocking the rendering of content. (Actually, that is correct behaviour according to the spec, designed to avoid multiple repaints and reflows.) jQuery was required for ads to be displayed, and a few JavaScripts depended on jQuery, so they were all blocking rendering as well. Ads were loaded and rendered before the content to ensure that they appeared as quickly as possible.

Images delivered by our ad partners were usually heavy and unoptimized, slowing down the page further. We also loaded Respond.js and Modernizr to deal with legacy browsers and to enhance the experience for smart browsers. As a result, articles were almost inaccessible on slow and unstable networks, and the start rendering time on mobile was disappointing at best.

It wasn’t just the front-end that was showing its age though. The back-end wasn’t getting any better either. In 2012 we were playing with the idea of having fully independent sections of the magazine — sections that would live their own lives, evolving and growing over time as independent WordPress installations, with custom features and content types that wouldn’t necessarily be shared across all sections.

Yes, we do enjoy a quite savvy user base, so optimization for IE8 is really not an issue. Large view.

Because WordPress Multisite wasn’t available at the time, we ended up with six independent, autonomous WordPress installs with six independent, autonomous style sheets. Those installs were connected to 6 × 2 databases (a media server and a static content server). We ran into dilemmas. For example, what if an author wrote for two sections and we wanted to show their articles from both sections on one single author’s bio page? We’d need to somehow pull articles from both installs and add redirects for each author’s page to that one unified page, or else use one of those pages as the “host”. You know where this is going: increasing complexity and increasing maintenance costs. In the end, the sections didn’t manage to evolve significantly — at least not in terms of content — yet we had already customized the technical foundation of each section, adding to the CSS dust and PHP complexity.

(Because we had outsourced WordPress tasks, some plugins depended on each other. So, if we were to deactivate one, we might have unwittingly disabled two or three others in the process, and they would have to be turned back on in a particular order to work properly. There were even differences in the HTML outputted by the PHP templates behind the curtains, such as classes and IDs that differed from one installation to the next. It’s no surprise that this setup made development a bit frustrating.)

Traffic was stagnant, readers kept complaining about the performance of the site, and only a very small portion of users visited more than two pages per visit. The lag when browsing the site was clearly noticeable and far from instant, and it was driving readers away from the site to Instapaper and Pocket, both on mobile and desktop. We knew that because we asked our readers, and the feedback was quite clear (and a bit frustrating).

It was time to push back — heavily, with a major refactoring of the code base. We looked closely under the hood, discovering a few pretty scary (and nasty) things, and started fixing issues, one by one. It took us quite a bit of time to make things right, and we learned quite a few things along the way.

Switching Gears

Up until mid-2013, we weren’t using a CSS preprocessor, nor any build tools. Good long-term solutions require a good long-term foundation, so the first issues we tackled were tooling and the way the code base was organized. Because a number of people had been working on the code base over the years, some things proved to be rather mysterious… or challenging, to say the least.

We started with a code inventory, and we looked thoroughly at every single class, ID and CSS selector. Of course, we wanted to build a system of modular components, so the first task was to turn our seven large CSS files into maintainable, well-documented and easy-to-read modules. At the time, we’d chosen LESS, for no particular reason, and so our front-end engineer Marco started to rewrite the CSS and build a modular, scalable architecture. Of course, we could very well have used Sass instead, but Marco felt quite comfortable with LESS at the time.

With a new CSS architecture, Grunt as a build tool and a few time-saving Grunt tasks, maintaining the entire code base became much easier. We set up a brand new testing environment, synced up everything with GitHub, assigned roles and permissions, and started digging. We rewrote selectors, reauthored markup, and refactored and optimized JavaScript. And yes, it took us quite some time to get things in order, but it really wouldn’t have been so difficult if we hadn’t had a number of very different style sheets to deal with.

The Big Back-End Cleanup

With the introduction of Multisite, creating a single WordPress installation from our six separate installations became a necessary task for our friends at Inpsyde. Over the course of five months, Christian Brückner and Thomas Herzog cleaned up the PHP templates, kicked unnecessary plugins into orbit, rewrote plugins we had to keep and added new ones where needed. They cleared the databases of all the clutter that the old plugins had created — one of the databases weighed in at 70 GB (no, that’s not a typo; we do mean gigabytes) — merged all of the databases into one, and then created a single fresh and, most importantly, maintainable WordPress Multisite installation.

The speed boost from those optimizations was remarkable. We are talking about 400 to 500 milliseconds of improvement by avoiding sub-domain redirects and unifying the code base and the back-end code. Those redirects are indeed a major performance culprit, and just avoiding them is one of those techniques that usually boosts performance significantly, because you avoid full DNS lookups, improve time to first byte and reduce round trips on the network.

Thomas and Christian also refactored our entire WordPress theme according to the coding standard of their own theme architecture, which is basically a sophisticated way of writing PHP based on the WordPress standard. They wrote custom drop-ins that we use to display content at certain points in the layout. Writing the PHP strictly according to WordPress’ official API felt like getting out of a horse-drawn carriage and into a race car. All modifications were done without ever touching WordPress’ core, which is wonderful because we’ll never have to fear updating WordPress itself anymore.

We also marked a few million spam comments across all the sections of the magazine. And before you ask: no, we did not import them into the new install.

We migrated the installations during a slow weekend in mid-April 2014. It was a huge undertaking, and our server had a few hiccups during the process. We brought together over 2500 articles, including about 15,000 images, all spread over six databases, which also had a few major inconsistencies. While it was a very rough start at first — a lot of redirects had to be set up, caching issues on our server piled up, and some articles got lost between the old and new installations — the result was well worth the effort.

Our editorial team, primarily Iris, Melanie and Markus, worked very hard to bring those lost articles back to life by analyzing our 404s with Google Webmaster Tools. We spent a few weekends to ensure that every single article was recovered and remains accessible. Losing articles, including their comments, was simply unacceptable.

We know well how much time it takes for a good article to get published, and ensuring that the content remained online was a matter of respect for the authors and their work. It took us a few weeks to get there, and it wasn’t the most enjoyable experience for sure, but we used the opportunity to introduce more consistency in our information architecture and to adjust tags and categories appropriately. (Ah, if you do happen to find an article that has gotten lost along the way, please do let us know and we’ll fix it right away. Thanks!)

Front-End Optimization

In April 2014, once the new system was in place and had been running smoothly for a few days, we rewrote the LESS files based on what was left of all of the installs. Streamlining the classes for posts and pages, getting rid of all unneeded IDs, shortening selectors by lowering their specificity, and rooting out anything in the CSS we could live without crunched the CSS from 91 KB down to a mere 45 KB.
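To give a sense of the kind of rewrite involved, here is a before/after sketch of lowering selector specificity (the selectors are illustrative examples, not our actual code):

```css
/* Before: a deep, ID-bound selector that is hard to override and reuse */
#main .post div.entry-content ul li a {
  color: #e53b2c;
}

/* After: one flat, low-specificity class that any module can apply */
.entry-link {
  color: #e53b2c;
}
```

Flattening selectors like this not only trims bytes; it also makes the cascade far more predictable, because a single class can no longer be outgunned by an accidental ID somewhere upstream.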

Once the CSS code base was in proper shape, it was time to reconsider how assets are loaded on the page and how we could improve the start rendering time beyond having a clean, well-structured code base. Given the nightmare we had experienced with the back-end previously, you might assume that improving performance would now be a complex, time-consuming task, but actually it was quite a bit easier than that. Basically, it was just a matter of getting our priorities right by optimizing the critical rendering path.

The key to improving performance was to focus on what matters most: the content, and the fastest way for readers to actually start reading our articles on their devices. So over the course of a few months we kept reprioritizing. With every update, we introduced mini-optimizations based on a very simple, almost obvious principle: optimize the delivery of content, and defer the rest — without any compromises, anywhere.

Our optimizations were heavily influenced by the work done by Scott Jehl, as well as by The Guardian and BBC teams (both of which open-sourced their work). While Scott has been sharing valuable insight into the front-end techniques that Filament Group was using, the BBC and The Guardian helped us to define and refine the concept of the core experience on the website and use it as a baseline. A shared main goal was to deliver the content as fast as possible to as many people as possible regardless of their device or network capabilities, and to enhance the experience with progressive enhancement for capable browsers.

Historically, however, we haven’t had a lot of JavaScript or complex interactions on Smashing Magazine, so we didn’t feel it was necessary to introduce complex loading logic with JavaScript preloaders. Still, being a content-focused website, we did want to reduce the time necessary for articles to start displaying as far as humanly possible.

Performance Budget: Speed Index <= 1000

How fast is fast enough? Well, that’s a tough question to answer. In general, it’s quite difficult to visualize performance and explain why every millisecond counts — unless you have hard data. At the same time, it is easy to fall into the trap of absolutes and rely on performance metrics that aren’t truly useful. In the past, the most commonly cited performance metric was average loading time. However, on its own, average loading time isn’t that helpful, because it doesn’t tell you much about when a user can actually start using the website. This is why talking about “fast enough” is often so tricky.

A nice way of visualizing performance is to use WebPageTest to generate an actual video of the page loading and to run a comparison test between two competing websites. In addition, the Speed Index metric often proves to be very useful.

Different components require different amounts of time to load, yet some components of the page are more important than others. For example, you don’t need to load the footer content quickly, but it’s a good idea to render the visible portion of the page fast. You know where this is heading: of course, we are talking about the “above the fold” view here. As Ilya Grigorik once said, “We don’t need to render the entire page in one second, [just] the above the fold content.” To achieve that, according to Scott’s research and Google’s test results, it’s helpful to set two ambitious performance goals:

  • a Speed Index of under 1000, i.e. start rendering the above-the-fold content in under one second, and
  • deliver the HTML, CSS and JavaScript required to render that content within the first 14 KB.

What do these goals mean, and why are they important? According to HCI research, “for an application to feel instant, a perceptible response to user input must be provided within hundreds of milliseconds. After a second or more, the user’s flow and engagement with the initiated task feels broken.” With the first goal, we are trying to ensure an instant response on our website. It refers to the so-called Speed Index metric for the start rendering time — the average time (in ms) at which visible parts of the page are displayed, or become accessible. So the first goal basically requires that a page starts rendering in under 1000 ms, and yes, that’s quite a difficult challenge to take on.

Ilya Grigorik’s book High Performance Browser Networking is a very helpful guide with useful guidelines and advice on making websites fast. And it’s available as a free HTML book, too.

The second goal can help in achieving the first one. The value of 14 KB has been measured empirically by Google and is the threshold for the first package exchanged between server and client in the very first round trip of a connection. You don’t need to fit images within those 14 KB, but you might want to deliver the markup, style sheets and any JavaScript required to render the visible portion of the page within that threshold. Of course, in practice this value can realistically be achieved only with gzip compression.

By combining the two goals, we basically defined a performance budget that we set for the website — a threshold for what was acceptable. Admittedly, we didn’t concern ourselves with the start rendering time on different devices on various networks, mainly because we really wanted to push back as far as possible everything that isn’t required to start rendering the page. So, the ideal result would be a Speed Index value that is way lower than the one we had set — as low as possible, actually — in all settings and on all connections, both shaky and stable, slow and fast. This might sound naive, but we wanted to figure out how fast we could be, rather than how fast we should be. We did measure start rendering time for first and subsequent page loads, but we did that much later, after optimizations had already been done, and just to keep track of issues on the front-end.

Our next step would be to integrate Tim Kadlec’s grunt-perfbudget task to incorporate the performance budget right into the build process and, thus, run every new commit against WebPageTest’s performance benchmark. If a commit fails, we know that a new feature has slowed us down, so we probably have to reconsider how it’s implemented to fit within our budget, or at least we know where we stand and can have meaningful discussions about its impact on overall performance.
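Conceptually, such a Gruntfile entry could look roughly like this (a sketch, not our actual configuration; the URL and WebPageTest API key are placeholders, and the option names follow grunt-perfbudget’s documentation):

```javascript
// Gruntfile.js (sketch): fail the build when the Speed Index budget is blown.
module.exports = function (grunt) {
  grunt.loadNpmTasks('grunt-perfbudget');

  grunt.initConfig({
    perfbudget: {
      all: {
        options: {
          url: '',
          key: 'YOUR_WEBPAGETEST_API_KEY',
          budget: {
            SpeedIndex: '1000' // our performance budget from above
          }
        }
      }
    }
  });
};
```

With that in place, a plain grunt perfbudget run (or a CI hook) turns the budget from a good intention into an enforced constraint.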

Prioritization And Separation Of Concerns

If you’ve been following The Guardian’s work recently, you might be familiar with the strict separation of concerns that they introduced during the major 2013 redesign. The Guardian separated its entire content into three main groups:

  • Core content
    Essential HTML and CSS, usable non-JavaScript-enhanced experience
  • Enhancement
    JavaScript, geolocation, touch support, enhanced CSS, web fonts, images, widgets
  • Leftovers
    Analytics, advertising, third-party content

A strict separation of concerns, or loading priorities, as defined by The Guardian team. Large view.

Once you have defined, confirmed and agreed upon these priorities, you can push performance optimization quite far. Just by being very specific about each type of content you have and by clearly defining what “core content” is, you are able to load Core content as quickly as possible, then load Enhancements once the page starts rendering (after the DOMContentLoaded event fires), and then load Leftovers once the page has fully rendered (after the load event fires).

The main principle here, of course, is to strictly separate the loading of assets across these three phases, so that the loading of the Core content is never blocked by any resources grouped under Enhancement or Leftovers (we haven’t achieved perfect separation just yet, but we are on it). In other words, you shorten the critical rendering path required for the content to start displaying by pushing the content down the wire as fast as possible and deferring pretty much everything else.
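The three phases map naturally onto the browser’s loading events. A minimal sketch (the function and phase names are ours, not The Guardian’s actual code):

```javascript
// Sketch: run the Enhancement phase after DOMContentLoaded and the
// Leftovers phase after load, so neither can block Core content rendering.
// Core content itself is the initial HTML/CSS; nothing to schedule there.
function schedulePhases(doc, win, phases) {
  doc.addEventListener('DOMContentLoaded', phases.enhancement);
  win.addEventListener('load', phases.leftovers);
}

// In the browser it would be wired up roughly like this:
// schedulePhases(document, window, {
//   enhancement: loadFullCssAndScripts,   // hypothetical loaders
//   leftovers: loadAnalyticsAndAds
// });
```

Passing the document and window in as parameters keeps the scheduling logic trivially testable outside a browser.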

We followed this same separation of concerns, grouping our content types into the same categories and identifying what’s critical, what’s important and what’s secondary. In our case, we identified and separated content in this way:

  • Core content
    Only essential HTML and CSS
  • Enhancement
    JavaScript, code syntax highlighter, full CSS, web fonts, comment ratings
  • Leftovers
    Analytics, advertising, Gravatars

Once you have this simple content/functionality priority list, improving performance becomes just a matter of adding a few snippets for loading assets that properly reflect those priorities. Even if your server logic forces you to load all assets on all devices, by focusing on content delivery first, you ensure that the content is accessible quickly, while everything else is deferred and loaded in the background, after the page has started rendering. From a strategic perspective, the list also reflects your technical debt, as well as the critical issues that slow you down. Indeed, we already had quite a list of issues to deal with at this point, so it transformed fairly quickly into a list of content priorities. And a rather tricky issue sat right at the top of that list: good ol’ web fonts.

Deferring Web Fonts

Despite the fact that the proportion of Smashing Magazine’s readers on mobile has always been quite modest (just around 15%—mainly due to the length of articles), we never considered mobile as an afterthought, but we never pushed user experience on mobile either. And when we talk about user experience on mobile, we mostly talk about speed, since typography was pretty much well designed from day one.

We had conversations during the 2012 redesign about how to deal with fonts, but we couldn’t find a solution that made everybody happy. The visual appearance of content was important, and because the new Smashing Magazine was all about beautiful, rich typography, not loading web fonts at all on mobile wasn’t really an option.

With the redesign back then, we switched to Skolar for headings and Proxima Nova for body copy, delivered by Fontdeck. Overall, we had three fonts for each typeface — Regular, Italic and Bold — totalling six font files to be delivered over the network. Even after our dear friends at Fontdeck had subsetted and optimized the fonts, the assets were quite heavy, at over 300 KB in total, and because we wanted to avoid the frequent flash of unstyled text (FOUT), we loaded them in the header of every page. Initially we thought that the fonts would reliably be cached in the HTTP cache, so they wouldn’t be retrieved with every single page load. Yet it turned out that the HTTP cache was quite unreliable: the fonts showed up in the waterfall loading chart every now and again for no apparent reason, both on desktop and on mobile.

The biggest problem, of course, was that the fonts were blocking rendering. Even if the HTML, CSS and JavaScript had already loaded completely, the content wouldn’t appear until the fonts had loaded and rendered. So you had this beautiful experience of seeing link underlines first, then a few keywords in bold here and there, then subheadings in the middle of the page and then finally the rest of the page. In some cases, when Fontdeck had server issues, the content didn’t appear at all, even though it was already sitting in the DOM, waiting to be displayed.

In his article “Web Fonts and the Critical Path”, Ian Feather provides a very detailed overview of the FOUT issues and font loading solutions. We tested them all.

We experimented with a few solutions before settling on what turned out to be perhaps the most difficult one. At first, we looked into using Typekit and Google’s WebFontLoader, an asynchronous script which gives you more granular control over what appears on the page while the fonts are being loaded. Basically, the script adds a few classes to the html element, which allows you to specify the styling of content in CSS both during and after font loading. So you can be very precise about how the content is displayed in fallback fonts first, before users see the switch from fallback fonts to web fonts.
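In CSS, that looks conceptually like this (a sketch with abbreviated font stacks; the wf-* class names are the ones WebFontLoader toggles):

```css
/* While fonts load, show the fallback stack. */
.wf-loading h2 {
  font-family: Georgia, "Times New Roman", serif;
}

/* Once loading succeeds, switch to the web font. */
.wf-active h2 {
  font-family: "Skolar Bold", Georgia, serif;
}

/* If loading fails or times out, stay on the fallback. */
.wf-inactive h2 {
  font-family: Georgia, "Times New Roman", serif;
}
```

The point is that the switch becomes an explicit, styleable state rather than an uncontrolled flash.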

We added fallback font declarations and ended up with pretty verbose CSS font stacks, using iOS fonts, Android fonts, Windows Phone fonts and good ol’ web-safe fonts as fallbacks — we still use these font stacks today. For example, we used this cascade for the main headings (it reflects the order of popularity of mobile operating systems in our analytics):

h2 { font-family: "Skolar Bold", AvenirNext-Bold, "Avenir Bold", "Roboto Slab", "Droid Serif", "Segoe UI Bold", Georgia, "Times New Roman", Times, serif; }

So readers would first see a mobile OS font (or another fallback font), probably one they are quite familiar with on their device, and then, once the fonts had loaded, they would see a switch, triggered by WebFontLoader. However, we discovered that after switching to WebFontLoader we started seeing FOUT way too often, with the HTTP cache being quite unreliable again, and that permanent switch from a fallback font to the web font was quite annoying, basically ruining the reading experience.

So we looked for alternatives. One solution was to include the @font-face directive only on larger screens by wrapping it in a media query, thus avoiding the loading of web fonts on mobile devices and in legacy browsers altogether. (In fact, if you declare web fonts in a media query, they will be loaded only when the media query matches the screen size, so there is no performance hit.) Obviously this improved performance on mobile devices in no time, but we didn’t feel right about serving a “simplified” reading experience on mobile devices. So it was a no-go, too.
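For reference, the media-query approach we tried looks roughly like this (a sketch; the breakpoint and font file URL are placeholders):

```css
/* Web fonts declared only inside a media query: narrow screens
   never match it, so they never download the font files. */
@media screen and (min-width: 48em) {
  @font-face {
    font-family: "Skolar Bold";
    src: url("/fonts/skolar-bold.woff") format("woff");
    font-weight: bold;
    font-style: normal;
  }
}
```

It is an appealingly simple trick, but it deliberately degrades typography on small screens, which is exactly what we were not willing to do.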

What else could we do? The only other option was to improve the caching of fonts. We couldn’t do much with the HTTP cache, but there was one option we hadn’t looked into: storing fonts in AppCache or localStorage. Jake Archibald’s article on the beautiful complexity of AppCache led us away from AppCache and towards experimenting with localStorage, a technique that The Guardian’s team was using at the time.

Now, offline caching comes with one major requirement: you need to have the actual font files in order to cache them locally in the client’s browser. And you can’t cache a lot, because localStorage space is very limited, sometimes with just 5 MB available per domain. Luckily, the Fontdeck guys were very helpful and forthcoming with our undertaking. Font delivery services usually require you to load files with a synchronous or asynchronous callback to count the number of impressions; despite that, Fontdeck was perfectly fine with us grabbing the WOFF files from Google Chrome’s cache, and set up “flat” pricing based on the number of page impressions in recent history.

So we grabbed the WOFF files and embedded them, base64-encoded, in a single CSS file, moving from six external HTTP requests of about 50 KB each to at most one HTTP request on the first load and 400 KB of CSS. Obviously, we didn’t want this file to be loaded on every visit. So if localStorage is available on the user’s machine, we store the entire CSS file in localStorage, set a cookie and switch from the fallback font to the web font. This switch usually happens at most once, because on subsequent visits we check whether the cookie has been set and, if so, retrieve the fonts from localStorage (causing about 50 ms of latency) and display the content in the web font right away. And just before you ask: yes, reading from and writing to localStorage is much slower than retrieving files from the HTTP cache, but it proved to be a bit more reliable in our case.

Yes, localStorage is much slower than the HTTP cache, but it’s more reliable. Storing fonts in localStorage isn’t the perfect solution, but it helped us improve performance dramatically.

If the browser doesn’t support localStorage, we include the fonts with a good ol’ link href and, well, frankly just hope for the best — that the fonts will be properly cached and persist in the user’s browser cache. For browsers that don’t support WOFF (IE8, Opera Mini, Android <= 4.3), we provide external URLs to fonts with older font MIME types, hosted on Fontdeck.

Now, if localStorage is available, we still don’t want it to block the rendering of the content. And we don’t want users to see FOUT every single time they load a page. That’s why we have a little JavaScript snippet in the header, before the body element: it checks whether a cookie has been set and, if not, loads the web fonts asynchronously after the page has started rendering. Of course, we could have avoided the switch altogether by storing the fonts in localStorage on the very first visit, but we decided that one switch is acceptable, because our typography is important to our identity.

The script was written, tested and documented by our good friend Horia Dragomir. Of course, it’s available as a gist on GitHub:

<script type="text/javascript">
(function () {
  "use strict";

  // once cached, the css file is stored on the client forever unless
  // the URL below is changed. Any change will invalidate the cache
  var css_href = './web-fonts.css';

  // a simple event handler wrapper
  function on(el, ev, callback) {
    if (el.addEventListener) {
      el.addEventListener(ev, callback, false);
    } else if (el.attachEvent) {
      el.attachEvent("on" + ev, callback);
    }
  }

  // if we have the fonts in localStorage or if we've cached them
  // using the native browser cache
  if ((window.localStorage && localStorage.font_css_cache) || document.cookie.indexOf('font_css_cache') > -1) {
    // just use the cached version
    injectFontsStylesheet();
  } else {
    // otherwise, don't block the loading of the page; wait until it's done.
    on(window, "load", injectFontsStylesheet);
  }

  // quick way to determine whether a css file has been cached locally
  function fileIsCached(href) {
    return window.localStorage && localStorage.font_css_cache && (localStorage.font_css_cache_file === href);
  }

  // time to get the actual css file
  function injectFontsStylesheet() {
    // if this is an older browser
    if (!window.localStorage || !window.XMLHttpRequest) {
      var stylesheet = document.createElement('link');
      stylesheet.href = css_href;
      stylesheet.rel = 'stylesheet';
      stylesheet.type = 'text/css';
      document.getElementsByTagName('head')[0].appendChild(stylesheet);
      // just use the native browser cache
      // this requires a good expires header on the server
      document.cookie = "font_css_cache";
    // if this isn't an old browser
    } else {
      // use the cached version if we already have it
      if (fileIsCached(css_href)) {
        injectRawStyle(localStorage.font_css_cache);
      // otherwise, load it with ajax
      } else {
        var xhr = new XMLHttpRequest();"GET", css_href, true);
        on(xhr, 'load', function () {
          if (xhr.readyState === 4) {
            // once we have the content, quickly inject the css rules
            injectRawStyle(xhr.responseText);
            // and cache the text content for further use
            // notice that this overwrites anything that might have
            // already been previously cached
            localStorage.font_css_cache = xhr.responseText;
            localStorage.font_css_cache_file = css_href;
          }
        });
        xhr.send();
      }
    }
  }

  // this is the simple utility that injects the cached or loaded css text
  function injectRawStyle(text) {
    var style = document.createElement('style');
    style.innerHTML = text;
    document.getElementsByTagName('head')[0].appendChild(style);
  }
}());
</script>

During the testing of the technique, we discovered a few surprising problems. Because the cache isn’t persistent in WebViews, fonts load asynchronously in applications such as Tweetdeck and Facebook, yet they don’t remain in the cache once the window is closed. In other words, with every WebView visit, the fonts are re-downloaded. Some old BlackBerry devices seemed to clear cookies and delete the cache when the battery was running out. And depending on the configuration of the device, sometimes fonts do not persist in Mobile Safari either.

Still, once the snippet was in place, articles started rendering much faster. By deferring the loading of web fonts and storing them in localStorage, we avoided around 700 ms of delay, and thus shortened the critical path significantly by avoiding the latency of retrieving all the fonts. The result was quite impressive for the first load of an uncached page, and it was even more impressive for subsequent visits, since we were able to reduce the latency caused by web fonts to just 40 to 50 ms. In fact, if we had to name just one improvement to performance on the website, deferring web fonts is by far the most effective.

At this point, we haven’t yet considered using the new WOFF2 format for fonts. Currently supported in Chrome and Opera, it promises better compression for font files and has already shown remarkable results. In fact, The Guardian was able to cut 200 ms of latency and 50 KB of file weight by switching to WOFF2, and we intend to look into moving to WOFF2 soon as well.

Of course, grabbing WOFFs might not always be an option for you, but it wouldn’t hurt just to talk to type foundries to see where you stand or to work out a deal to host fonts “locally.” Otherwise, tweaking WebFontLoader for Typekit and Fontdeck is definitely worth considering.
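The truncated snippet above fits into a pattern along these lines. This is a simplified sketch, not the full snippet on the page: the file name and cache keys are assumed, and the format detection and cache busting of the real version are omitted.

```javascript
// Sketch of the localStorage font-caching pattern described above.
// Call loadFonts() as early as possible in the page.
function loadFonts() {
  var cssHref = '/fonts.css'; // assumed path to the @font-face style sheet

  // Cache hit: inject the stored CSS synchronously, no network request.
  if (window.localStorage && localStorage.font_css_cache_file === cssHref) {
    injectRawStyle(localStorage.font_css_cache);
    return;
  }

  // Cache miss: fetch the font CSS asynchronously and store it
  // for subsequent visits.
  var xhr = new XMLHttpRequest();
  xhr.open('GET', cssHref, true);
  xhr.onload = function () {
    if (xhr.status === 200) {
      injectRawStyle(xhr.responseText);
      localStorage.font_css_cache = xhr.responseText;
      localStorage.font_css_cache_file = cssHref;
    }
  };
  xhr.send();
}

// Inject the cached or freshly loaded CSS text into the head.
function injectRawStyle(text) {
  var style = document.createElement('style');
  style.innerHTML = text;
  document.getElementsByTagName('head')[0].appendChild(style);
}
```

On a repeat visit the cache-hit branch runs synchronously, which is why the fonts cost essentially nothing after the first load.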

Dealing With JavaScript

With the goal of removing all unnecessary assets from the critical rendering path, the second target we decided to tackle was JavaScript. It’s not that we particularly dislike JavaScript; we simply tend to prefer non-JavaScript solutions to JavaScript-based ones. In fact, if we can avoid JavaScript or replace it with CSS, then we’ll always explore that option.

Back in 2012, we weren’t using a lot of scripts on the page, yet displaying advertising via OpenX depended on jQuery, which made it way too easy to lazily tackle simple, straightforward tasks with ready-to-use jQuery plugins. At the time, we also used Respond.js to emulate responsive behaviour in legacy browsers. However, Internet Explorer 8 usage dropped significantly between 2012 and 2014: from 4.7% before the redesign to 1.43%, with a downward trend every single month. So we decided to deliver a fixed-width layout to those users with a specific IE8.css stylesheet, and removed Respond.js altogether.

As a strategic decision, we decided to defer the loading of all JavaScripts until the page has started rendering and we looked into replacing jQuery with lightweight modular JavaScript components.

jQuery was tightly bound to ads, and ads were supposed to start displaying as quickly as possible, so we had to deal with advertising first. The decision to defer the loading of ads wasn’t easy to get agreement on, but we made a convincing argument that better performance would increase click rates, because users would see the content sooner. That is, on every page, readers would be drawn in by the high-quality content and then, once the ads kicked in, would pay attention to those squares in the sidebar as well.

Florian Sander54, our partner in crime when it comes to advertising, rewrote the script for our banner ads so that banners are loaded only after the content has started rendering, and only then are the advertising spots put into place. Florian was able to eliminate two render-blocking HTTP requests that the ad script normally generated, and we removed the dependency on jQuery by rewriting the script in vanilla JavaScript.

Obviously, because the sidebar’s ad content is generated on the fly and loaded after the render tree has been constructed, we started seeing reflows (this still happens while the page is being constructed). Because we used to load ads before the content, the entire page (with pretty much everything) used to load at once. Now we’ve moved to a more modular structure, grouping particular parts of the page together and queuing them to load one after another. This has made the overall experience on the site a bit noisier, because there are a few jumps here and there: in the sidebar, in the comments and in the footer. That was a compromise we accepted, and we are working on a solution that reserves space for “jumping” elements to avoid reflows as the page loads.

Deferring Non-Critical JavaScript

When the prospect of removing jQuery altogether became tangible as a long-term goal, we started working step by step to decouple jQuery dependencies from the library. We rewrote the script to generate footnotes for the print style sheet (later replacing it with a PHP solution), rewrote the functionality for rating comments, and rewrote a few other scripts. Actually, with our savvy user base and a solid share of smart browsers, we were able to move to vanilla JavaScript quite quickly. Moreover, we could now move scripts from the header to the footer to avoid blocking construction of the DOM tree. In mid-July, we removed jQuery from our code base entirely.

We wanted full control of what is loaded on the page and when. Specifically, we wanted to ensure that no JavaScript blocks the rendering of content at any point. So, we use the Defer Loading JavaScript55 script to load JavaScript after the load event by injecting the JavaScript after the DOM and CSSOM have already been constructed and the page has been painted. Here’s the snippet that we use on the website, with the defer.js script (which is loaded asynchronously after the load event):

function downloadJSAtOnload() {
    var element = document.createElement("script");
    element.src = "defer.js";
    document.body.appendChild(element);
}

if (window.addEventListener)
    window.addEventListener("load", downloadJSAtOnload, false);
else if (window.attachEvent)
    window.attachEvent("onload", downloadJSAtOnload);
else
    window.onload = downloadJSAtOnload;

However, because script-injected asynchronous scripts are considered harmful56 and slow (they block the browser’s speculative parser), we might look into using the good ol’ defer and async attributes instead. In the past, we couldn’t use async for every script because we needed jQuery to load before its dependencies; so we used defer, which respects the loading order of scripts. With jQuery out of the picture, we can now load scripts asynchronously, and fast. Actually, by the time you read this article, we might already be using async.
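As a sketch (the file names are hypothetical): defer preserves document order, while async executes each script as soon as it arrives.

```html
<!-- Executes as soon as it has downloaded, in no guaranteed order:
     fine for standalone scripts such as analytics. -->
<script src="analytics.js" async></script>

<!-- Executes after parsing, in document order:
     safe for scripts that depend on one another. -->
<script src="comments.js" defer></script>
<script src="comments-init.js" defer></script>
```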

Basically, we just deferred the loading of all JavaScripts that we identified previously, such as syntax highlighter and comment ratings, and cleared a path in the header for HTML and CSS.

Inlining Critical CSS

That wasn’t good enough, though. Performance did improve dramatically; however, even with all of these optimizations in place, we didn’t hit that magical Speed Index value of under 1000. In light of the ongoing discussion about inline CSS and above-the-fold CSS, as recommended by Google57, we looked into more radical ways to deliver content quickly. To avoid an HTTP request when loading CSS, we measured how fast the website would be if we were to load critical CSS inline and then load the rest of the CSS once the page has rendered.

Scott Jehl’s article59 explains how exactly to extract and inline critical CSS.

But what exactly is critical CSS? And how do you extract it from a potentially complex code base? As Scott Jehl points out60, critical CSS is the subset of CSS that is needed to render the top portion of the page across all breakpoints. What does that mean? Well, you would decide on a certain height that you would consider to be “above the fold” content — it could be 600, 800 or 1200 pixels or anything else — and you would collect into their own style sheet all of the styles that specify how to render content within that height across all screen widths.

Then you inline those styles in the head, giving the browser everything it needs to start rendering the visible portion of the page, all within a single HTTP request. You’ve heard it a few times by now: everything else is deferred until after the initial rendering. You avoid an HTTP request, and you load the full CSS asynchronously, so that once the user starts scrolling, the full CSS will (hopefully) already have loaded.

Visually speaking, content will appear to render more quickly, but there will also be more reflowing and jumping on the page. So, if a user has followed a link to a particular comment below the “fold”, they will see a few reflows as the website is being constructed, because the page is rendered with critical CSS first (there is only so much we can fit within 14 KB!) and adjusted later with the complete CSS. Of course, inline CSS isn’t cached; so, if you inline critical CSS and load the complete CSS on rendering, it’s useful to set a cookie so that styles aren’t inlined with every single load. The drawback, of course, is that you might end up with duplicate CSS, because you would be defining styles both inline and in the full style sheet, unless you’re able to strictly separate them.

Because we had just refactored our CSS code base, identifying critical CSS wasn’t very difficult. Obviously, there are smart61 tools62 that analyze the markup and CSS, identify critical styles and export them into a separate file during the build process, but we were able to do it manually. Again, keep in mind that 14 KB is your budget for HTML and CSS, so in the end we had to rename a few classes here and there and compress the CSS as well.

We analyzed the first 800px, checking the inspector for the CSS that was needed and separating our style sheet into two files – and that was pretty much it. One of those files, above-the-fold.css, is minified and compressed, and its contents are placed inline in the head of our document as early as possible, without blocking rendering. The other file, our full CSS, is then loaded with JavaScript after the content has loaded. If JavaScript isn’t available for some reason, or the user is on a legacy browser, we’ve put the full CSS file inside a noscript tag at the end of the head, so those users don’t get an unstyled HTML page.
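The loading step described above can be sketched as follows (the file name is assumed): the inlined above-the-fold styles sit in a style element in the head, this helper appends the full style sheet after rendering, and a noscript fallback links the full style sheet directly.

```javascript
// Append the full style sheet to the head without blocking rendering.
function loadFullCSS(href) {
  var link = document.createElement('link');
  link.rel = 'stylesheet';
  link.href = href;
  document.getElementsByTagName('head')[0].appendChild(link);
  return link;
}

// Called after the load event so that it never delays first paint:
// window.addEventListener('load', function () { loadFullCSS('full.css'); });
```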

Was It All Worth It?

Because we’ve just implemented these optimizations, we haven’t been able to measure their impact on traffic, but we’ll publish those results later as well. We did notice quite a remarkable technical improvement, though. By deferring and caching Web fonts, inlining CSS and optimizing the critical rendering path for the first 14 KB, we achieved dramatic improvements in loading times. The start render time settled at around 1s for an uncached page on 3G and was around 700ms (including latency!) on subsequent loads.

We’ve been using WebPageTest64 a lot for running tests. Our waterfall chart improved over time, reflecting the priorities we had defined earlier. Large view.65

On average, Smashing Magazine’s front page makes 45 HTTP requests and weighs 440 KB on the first uncached load. Because we heavily cache everything but ads, subsequent visits make around 15 HTTP requests and transfer 180 KB. The First Byte time is still around 300–600ms (which is a lot), yet the Start Render time is usually under 0.7s66 on a DSL connection in Amsterdam (for the very first, uncached load), and usually under 1.7s on a slow 3G67. On a fast cable connection, the site starts rendering within 0.8s68, and on a fast 3G, within 1.1s69. Obviously, the results vary significantly depending on the First Byte time, which we can’t improve just yet, at the time of writing. That’s the one factor that introduces unpredictability into the loading process and, as such, has a decisive impact on overall performance.

Just by following basic guidelines by our colleagues mentioned above and Google’s recommendations, we were able to achieve the 97–99 Google PageSpeed score70 both on desktop and on mobile. The score varies depending on the quality and the optimization level of advertising assets displayed randomly in the sidebar. Again, the main culprit is the server’s response time — not for long, though.

After a few optimizations, we achieved a Google PageSpeed score of 99 on mobile72.

We got a Google PageSpeed score of 99 on the desktop74 as well.

By the way, Scott Jehl has also published a wonderful article on the front-end techniques75 FilamentGroup uses to extract critical CSS and load it inline while loading the full CSS afterwards and avoid downloading overheads. Patrick Hamann’s talk on “Breaking News at 1000ms”76 explains a few techniques that The Guardian is using to hit the SpeedIndex 1000 mark. Definitely worth reading and watching, and indeed quite similar to what we implemented on this very site as well.

Work To Be Done

While the results we were able to achieve are quite satisfactory, there is still a lot of work to be done. For example, we haven’t considered optimizing the delivery of images just yet, and are now adjusting our editorial process to integrate the new picture element and srcset/sizes with Picturefill 2.1.077, to make the loading even faster on mobile devices. At the moment, all images have a fixed width of 500px and are basically scaled down on smaller views. Every image is optimized and compressed, but we don’t deliver different images for different devices — and no, we aren’t delivering any Retina images at all. That is all about to change soon.

While Smashing Magazine’s home page is well optimized, some pages and articles still perform poorly. Articles with many comments are quite slow because we use Gravatars78 in the comments. Because each Gravatar URL is unique, every comment generates an extra HTTP request, slowing down the loading of the page. We are going to defer the loading of Gravatars and cache them locally with a Gravatar Cache WordPress plugin79. We might have already done it by the time you read this.

We’re playing around with DNS prefetching and HTML5 preloading to resolve DNS lookups way ahead of time (for example, for Gravatars and advertising). However, we are being careful and hesitant here because we don’t want to create a loading overhead for users on slow or expensive connections. Besides, we’ve added third-party meta data80 to make our articles a bit easier to share. So, if you link to an article on Facebook, Facebook will pull optimized images, a description and a title from our meta data, which is crafted individually for each article.
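Such a prefetch hint is a one-line addition to the head; for example (the ad domain here is a placeholder):

```html
<!-- Resolve third-party host names ahead of time -->
<link rel="dns-prefetch" href="//secure.gravatar.com">
<link rel="dns-prefetch" href="//ads.example.com">
```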

Yes, we can use SPDY today82. We just need to install SPDY Nginx Module83 or Apache SPDY Module84. This is what we are going to tackle next.

Despite all of our optimizations, the main issue still hasn’t been resolved: very slow servers and First Byte response times. We’ve been experiencing difficulties with our current server setup and architecture but are tied to a long-term contract; still, we will be moving to a new server soon. We’ll take that opportunity to also move to SPDY85 on the server, a predecessor of HTTP 2.0 (which is well supported in major browsers86, by the way), and we are looking into using a content delivery network as well.

Performance Optimization Strategy

To sum up, figuring out how to optimize Smashing Magazine’s performance took quite an effort, yet many aspects of the optimization can be implemented very quickly. In particular, front-end optimization is quite easy and straightforward as long as you have a shared understanding of priorities. Yes, that's right: you optimize content delivery, and defer everything else.

Strategically speaking, the following could be your performance optimization roadmap:

  • Remove blocking scripts from the header of the page.
  • Identify and defer non-critical CSS and JavaScript.
  • Identify critical CSS and load it inline in the head, and then load the full CSS after rendering. (Make sure to set a cookie to prevent inline styles from loading with every page load.)
  • Keep all critical HTML and CSS to under 14 KB, and aim for a Speed Index of under 1000.
  • Defer the loading of Web fonts and store them in localStorage or AppCache.
  • Consider using WOFF2 to further reduce latency and file size of the web fonts.
  • Replace JavaScript libraries with leaner JavaScript modules.
  • Avoid unnecessary libraries, and look into options for removing Respond.js and Modernizr; for example, by “cutting the mustard87” to separate browsers into buckets. Legacy browsers could get a fixed-width layout. Clever SVG fallbacks88 also exist.
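The “cutting the mustard” test mentioned in the last point usually boils down to a couple of feature checks. A sketch (the exact checks are up to you; the function takes the globals as arguments purely so that it can be tested in isolation):

```javascript
// Feature-test the browser: modern browsers get the full experience,
// legacy browsers get core content with a fixed-width layout.
function cutsTheMustard(win, doc) {
  return ('querySelector' in doc) &&
         ('localStorage' in win) &&
         ('addEventListener' in win);
}

// Usage in the page might look like:
// if (cutsTheMustard(window, document)) { loadEnhancements(); }
```

Browsers that fail the test simply never download the enhancement scripts, which keeps their critical path short as well.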

That’s basically it. By following these guidelines, you can make your responsive website really, really fast.


Yes, finding just the right strategy to make this very website fast took a lot of experimentation, blood, sweat and cursing. Our discussions kept circling around next steps and around critical and not-so-critical components, and sometimes we had to take three steps back to pivot in a different direction. But we learned a lot along the way, and we now have a pretty clear idea of where we are heading and, most importantly, how to get there.

So here you have it. A little story about the things that happened on this little website over the last year. If you notice any issues, please let us know on Twitter @smashingmag89 and we'll hunt them down for good.

Ah, and thanks for reading us throughout all these years. It means a lot. You are quite smashing indeed. You should know that.

A big "thank you" to Patrick Hamann and Jake Archibald for the technical review of the article as well as Andy Hume and Tim Kadlec for their fantastic support throughout the years. Also a big "thank you" to our front-end engineer, Marco, for his help with the article and for his thorough and tireless front-end work, which involved many experiments, failures and successes along the way. Also, kind thanks to the Inpsyde team and Florian Sander for technical implementations.

A final thank you goes out to Iris, Melanie, Cosima and Markus for keeping an eye out for those nasty bugs and looking after the content on the website. Without you, this website wouldn’t exist. And thank you for having my back all this time. I respect and value every single bit of it. You rock.

(al, vf, il)



The post Improving Smashing Magazine’s Performance: A Case Study appeared first on Smashing Magazine.

Internal Developer Training: Doing It Right

Fri, 09/05/2014 - 13:39

Successful developers all have something in common: the desire to create. To fully realize that creativity, they need to continually improve their skills. The web industry has grown from this desire to learn. You only need to look at the unwavering demand for conferences, workshops and training days for evidence of this.

For many companies, however, these sources of training require time and money that simply might not be available — especially when you consider that technologies evolve all the time. The cost of continually sending your team to workshops and training days can quickly become unsustainable.

People in the web industry in particular believe in sharing what they’ve learned and helping others to improve their skills. This is the perfect premise on which to develop an internal training program. Within your team lies a wealth of skills, knowledge and experience that can be shared and developed further. With a little effort and using resources freely available on the web, you can increase the technical competence of the team organically, with much lighter demands on time and cost.

Why Bother?

Good developers will teach themselves anyway, right? Well, yes. But significant benefits are to be gained from formalizing and actively championing training within the company.

Developers who excel in a particular technology can teach it to others, gaining morale-boosting recognition and a reputation for being the go-to person for that skill. Junior members of the team will learn what the team is capable of and who they should query with specific questions. This is much more valuable than you might realize — knowing exactly where to go when a problem arises can quickly prevent bottlenecks in a project and make the team much more responsive.

As developers spend structured time together, they will learn the strengths and weaknesses of the team and form a more cohesive unit. They will be more able and willing to innovate and push boundaries if they know the full capabilities of their colleagues.

Most importantly, regular well-executed training will make developers better at their job and probably much happier. They will understand more, be challenged more and be significantly more productive.

Developers will always be more committed when value is put on their current skills and when their potential is invested in. In an industry that has so many attractive and flexible places to work, training can be a significant perk that helps to retain and attract talent.

Let’s Get Started Already!

The first challenge you will likely face in implementing regular training sessions is getting the company to buy into what you are trying to achieve. Explain the aforementioned benefits to aid your cause. However, you might have to get creative if your work environment is less flexible. For example, consider reducing the investment of time by proposing a “brown bag” approach. Get team members to bring their own food and make the training session an informal lunch meeting.

Management is much more likely to offer its full support if it can see evidence of the benefits. Clearly explain that not only are you looking for their approval, but you want to keep them in the loop as the training progresses. Showing a comprehensive plan and clear metrics for how the team will improve will go a long way to convincing management that the investment of time will benefit the company.

The Training Plan

To formulate the plan, look through the most recent projects that your team has worked on. Analyze the skill sets that were used. Talk to project managers about any issues that may have arisen. Keep an eye on developments in the wider industry and how they might bear on future projects.

Most importantly, look at the developers’ personal development plans and see how training could facilitate their goals. This will also help you to identify senior members of the team and those with specific expertise who would benefit from leading the training sessions themselves. Senior members in particular will have a wealth of development and commercial experience.

Of course, make sure that the senior members of the team are on board and would be comfortable leading training sessions. Give them enough time to prepare, and provide guidance on what is expected, while still allowing them sufficient freedom to make the session their own.

Keep the training plan simple. List the specific sessions you wish to include, briefly describe them, and assign them to developers who have the skills to lead them.

Order the training sessions by importance, but don’t feel you have to attach dates. Depending on the size of the team, you might find that key members will be absent for some of them and that you will need to reschedule.

At the end of each session, date it and mark it as completed in the training plan. Write any relevant notes next to the entry, such as problems, areas not covered and new avenues to explore in future sessions. Make the document a collaborative spreadsheet to make it easier to share internally.

Measuring Skill Level

Exactly measuring a developer’s skill level is difficult, but a generalized indication will help.

One way is to use a skills matrix, listing each team member down the left column, languages and skills along the top, and a scale of 0 to 10 as measurements:

  • 0
    no experience
  • 1–3
    understands the basics
  • 4–7
    competent with practical experience
  • 8–10
    expert

A sample skills matrix (View large version2)

Adapt the scale to your needs. You could make it more general, with terms such as beginner, intermediate and expert. Or make it more complex, depending on the skills required by your team. Review it when training sessions are completed and after significant projects.

A matrix that is up to date makes for a useful tool to allocate resources, schedule work and inform the wider company of the development team’s capabilities.

Stick To Your Principles

Before planning the content of the training sessions, consider some underlying principles.


Due to the nature of development, finding a regular time when all members of the development team can step away from their work is tricky. Avoid standard release dates and the preceding and succeeding days.

Aim for once a week. Greater frequency could threaten deadlines and meet with resistance from management. Keep the sessions consistent; too much rescheduling or skipping of sessions will devalue the importance of the training in the eyes of the developers.

Friday is often a good time, especially in the afternoon. Most of the company will be winding down at this time, and disruption will probably be minimal. If homework is assigned, this also gives developers the opportunity to dabble with it on the weekend.

Plan the sessions in advance. Keep them short and sweet, no longer than an hour to keep everyone engaged.


A meeting room with a large screen and wireless Internet would be ideal. Ensure that there is enough comfortable seating so that everyone can participate easily.

Such rooms are usually designed for client presentations, which can make them difficult to book. Again, scheduling the training sessions for a slow period of the week and booking in advance should help with that. Send out calendar invitations so that the team blocks out that time, too.

Let any potentially disruptive colleagues know that training sessions should not be interrupted. Once everyone has arrived, close the door. Shut out everything (and everyone) that could be a distraction.

Don’t forget about off-site members of the team. Being included will give them the benefit of the training and also remind them that they are considered part of the team. Use Skype or Google Hangouts to include them. Ensure that their supervisor knows about the training session so that they can allocate the time and, ideally, a quiet, comfortable environment.


To protect the time, both the company and the team need to agree that attendance at training sessions is mandatory. Exceptions and rescheduling should happen only in extreme circumstances.

Phones and laptops are distractions and should be discouraged. Attention should be focused on the presenter and their material.


When planning the sessions, try to align the individual developers’ targets with the company’s goals for growth. Focus on technologies and techniques that will not only benefit the team, but increase the company’s expertise.

The skill-level matrix mentioned above can be distributed to other departments to help them understand the development team’s capabilities.


Without practical application, training will be quickly forgotten. To achieve real progress, assign a task for the participants to practice the skills they’ve learned in the session.

The assignment should be small enough to achieve in the downtime between projects or outside of normal working hours if necessary. More importantly, it should be interesting enough that a developer would want to do it, especially if it needs to be completed in their spare time.

Reviewing an assignment could be the focus of the subsequent session, in which you would explore different approaches and techniques, as well as identify and reward those who have excelled.

Homework is, of course, optional. Not everyone will want to do it, and, despite their best intentions, developers won’t always have the time to tackle it.

But if the training sessions are aligned with both the company’s goals and their personal development plans, you might be surprised by how willing the developers are to complete homework. They’ll be inspired by the chance to show off their skill, gain recognition from colleagues and maybe even win a prize.


Not everyone will make it to every training session. Developers take vacations, and urgent bugs and tight deadlines will sometimes intrude. Recording sessions is a good way to give those who miss one a chance to catch up.

Also, share the slides and links from each session with attendees. The best way to do this is to set up a GitHub Pages website using Jekyll3 and get everyone to contribute. The website could also double as an internal knowledge base.


Keep it fun! If the training sessions become a chore, then they probably won’t be successful. A friendly, open and honest environment will create the right culture for growth, fostering connections between team members, and improving communication and cohesiveness.

Let’s Break It Down

So, how do you go about structuring a training session? As mentioned, this is highly subjective and depends on both the facilitator and the team. However, if you’re struggling to know where to begin, let’s make a meal out of it!

The Appetizer

Everyone likes to have a taste of what is going on, so start with a quick business update, detailing the company’s latest wins and the progress of work underway. If you have any other news about the company, including potential opportunities within, consider sharing it, too.

An update on the wider industry could also be beneficial. If any key developments have happened, discuss these and share links to relevant articles. The beginning of the session is also a good opportunity to review homework and single out the best solution with recognition (and a trophy if you’re feeling generous!).

Don’t dwell on any of these things for long. This section shouldn’t last longer than 20 minutes.

The Main Course

The meat of the session should focus on the designated topic.

The most common type of session will probably be a tutorial on a particular language or technique. Don’t assume anything. Introduce the technology, explaining its purpose and situations when it is best used, not forgetting its limitations. Ask for opinions and experiences from any team members who have experience with the technology.

Showing examples is the easiest way to demonstrate a technology. Prepare these carefully, especially if you plan to follow a similar approach in your development projects. Keep them succinct. Either use multiple small examples, or break down a single big example into digestible modules. Avoid live coding unless it is simple and prepared in advance.

Deposit all of the coding examples in your knowledge base or GitHub repository so that the team can examine them after the session.

With more complex, substantial areas, consider splitting the training into multiple sessions. Start with the basics, and increase the learning curve each week. Don’t rely on tutorials alone — mix things up. Plenty of different formats will give developers valuable knowledge and insight.

Deconstruct a project completed by the team. Identify successful approaches, and analyze any issues that arose. Review the techniques used, and get feedback from developers who worked on the project. This will help to account for contingencies if any changes need to be made and will demonstrate good ways to tackle future projects.

If your company is more creative and pioneering, consider devoting sessions to new hardware that has been acquired. Play around with it and inspire your developers.

Collaboration within the team and with other departments could also be incorporated into training sessions. Consider two speakers from different areas presenting the same technology — programmers and designers will often have very different views. Or venture even further and invite a project manager to lead a session, which could improve processes, communication and understanding between departments.

The Dessert

Finally, finish the session by mulling over what you’ve covered. Invite questions and encourage discussion.

Before everyone leaves, assign the homework. Choose it ahead of time, and clearly explain it. The assignment should relate to the material covered in the session — and perhaps extend it.

A sample training session schedule (View large version4)

Continual Improvement

Continually review the effectiveness of the training sessions. Once they have become a regular fixture, solicit feedback.

Keep the training collaborative. Invite the development team to tell you what works for them and what doesn’t, and be prepared to alter the training plan. Also, look to the wider company to see what impact the training is having and whether particular areas might need more focus.

Every team and every company continually evolves. Training will help to keep both aligned and at the forefront of the industry, enabling them to shine.

(ml, al, il)

Front page image credits: The Next Web Photos5.


The post Internal Developer Training: Doing It Right appeared first on Smashing Magazine.

Animating Without jQuery

Thu, 09/04/2014 - 13:11

There’s a false belief in the web development community that CSS animation is the only performant way to animate on the web. This myth has coerced many developers to abandon JavaScript-based animation altogether, thereby (1) forcing themselves to manage complex UI interaction within style sheets, (2) locking themselves out of supporting Internet Explorer 8 and 9, and (3) forgoing the beautiful motion design physics that are possible only with JavaScript.

Reality check: JavaScript-based animation is often as fast as CSS-based animation — sometimes even faster. CSS animation only appears to have a leg up because it’s typically compared to jQuery’s $.animate(), which is, in fact, very slow. However, JavaScript animation libraries that bypass jQuery deliver incredible performance by avoiding DOM manipulation as much as possible. These libraries can be up to 20 times faster than jQuery.

So, let’s smash some myths, dive into some real-world animation examples and improve our design skills in the process. If you love designing practical UI animations for your projects, this article is for you.

Why JavaScript?

CSS animations are convenient when you need to sprinkle property transitions into your style sheets. Plus, they deliver fantastic performance out of the box — without your having to add libraries to the page. However, when you use CSS transitions to power rich motion design (the kind you see in the latest versions of iOS and Android), they become too difficult to manage or their features simply fall short.

Ultimately, CSS animations limit you to what the specification provides. In JavaScript, by the very nature of any programming language, you have an infinite amount of logical control. JavaScript animation engines leverage this fact to provide novel features that let you pull off some very useful tricks:

Note: If you’re interested in learning more about performance, you can read Julian Shapiro’s “CSS vs. JS Animation: Which Is Faster?5” and Jack Doyle’s “Myth Busting: CSS Animations vs. JavaScript6.” For performance demos, refer to the performance pane7 in Velocity’s documentation and GSAP’s “Library Speed Comparison8” demo.

Velocity and GSAP

The two most popular JavaScript animation libraries are Velocity.js9 and GSAP10. They both work with and without11 jQuery. When these libraries are used alongside jQuery, there is no performance degradation because they completely bypass jQuery’s animation stack.

If jQuery is present on your page, you can use Velocity and GSAP just like you would jQuery’s $.animate(). For example, $element.animate({ opacity: 0.5 }); simply becomes $element.velocity({ opacity: 0.5 }).

These two libraries also work when jQuery is not present on the page. This means that instead of chaining an animation call onto a jQuery element object — as just shown — you would pass the target element(s) to the animation call:

/* Working without jQuery */
Velocity(element, { opacity: 0.5 }, 1000); // Velocity
TweenMax.to(element, 1, { opacity: 0.5 }); // GSAP

As shown, Velocity retains the same syntax as jQuery’s $.animate(), even when it’s used without jQuery; just shift all arguments rightward by one position to make room for passing in the targeted elements in the first position.

GSAP, in contrast, uses an object-oriented API design with convenient static methods, giving you full control over your animations.

In both cases, you’re no longer animating a jQuery element object, but rather a raw DOM node. As a reminder, you access raw DOM nodes by using document.getElementById, document.getElementsByTagName, document.getElementsByClassName or document.querySelectorAll (which works similarly to jQuery’s selector engine). We’ll briefly work with these functions in the next section.

Working Without jQuery

(Note: If you need a basic primer on working with jQuery’s $.animate(), refer to the first few panes in Velocity’s documentation.12)

Let’s explore querySelectorAll further because it will likely be your weapon of choice when selecting elements without jQuery:

document.querySelectorAll("body");      // Get the body element
document.querySelectorAll(".squares");  // Get all elements with the "squares" class
document.querySelectorAll("div");       // Get all divs
document.querySelectorAll("#main");     // Get the element with an id of "main"
document.querySelectorAll("#main div"); // Get the divs contained by "main"

As shown, you simply pass querySelectorAll a CSS selector (the same selectors you would use in your style sheets), and it will return all matched elements in an array-like NodeList. Hence, you can do this:

/* Get all div elements. */
var divs = document.querySelectorAll("div");

/* Animate all divs at once. */
Velocity(divs, { opacity: 0.5 }, 1000); // Velocity
TweenMax.to(divs, 1, { opacity: 0.5 }); // GSAP

Because we’re no longer attaching animations to jQuery element objects, you may be wondering how we can chain animations back to back, like this:

$element // jQuery element object
  .velocity({ opacity: 0.5 }, 1000)
  .velocity({ opacity: 1 }, 1000);

In Velocity, you simply call animations one after another:

/* These animations automatically chain onto one another. */
Velocity(element, { opacity: 0.5 }, 1000);
Velocity(element, { opacity: 1 }, 1000);

Animating this way has no performance drawback (as long as you cache the element being animated to a variable, instead of repeatedly doing querySelectorAll lookups for the same element).
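The caching point can be sketched without any animation library at all. In the snippet below, fakeQuerySelectorAll and animate are hypothetical stand-ins (not real APIs) for document.querySelectorAll and a Velocity call; the counter simply shows that caching the result means the comparatively slow DOM lookup runs once, no matter how many animation calls follow:

```javascript
// Illustrative sketch of the caching advice above. fakeQuerySelectorAll and
// animate are hypothetical stand-ins for document.querySelectorAll and a
// Velocity call; lookups counts how many (slow) DOM traversals would occur.
var lookups = 0;

function fakeQuerySelectorAll(selector) {
  lookups++; // each call would cost a DOM traversal
  return [{ tagName: "DIV" }]; // stand-in for a NodeList
}

function animate(elements, properties) {
  // stand-in for Velocity(elements, properties, duration)
}

// Uncached: a fresh lookup before every chained animation call.
animate(fakeQuerySelectorAll("div"), { opacity: 0.5 });
animate(fakeQuerySelectorAll("div"), { opacity: 1 });

// Cached: look the elements up once, then reuse the reference.
var divs = fakeQuerySelectorAll("div");
animate(divs, { opacity: 0.5 });
animate(divs, { opacity: 1 });

console.log(lookups); // 3: two uncached lookups plus one cached lookup
```

The same pattern applies however many chained calls you make: the cached version stays at one lookup while the uncached version grows linearly.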

(Tip: With Velocity’s UI pack, you can create your own multi-call animations and give them custom names that you can later reference as Velocity’s first argument. See Velocity’s UI Pack documentation13 for more information.)

This one-Velocity-call-at-a-time process has a huge benefit: If you’re using promises14 with your Velocity animations, then each Velocity call will return an actionable promise object. You can learn more about working with promises in Jake Archibald’s article15. They’re incredibly powerful.

In the case of GSAP, its expressive object-oriented API allows you to place your animations in a timeline, giving you control over scheduling and synchronization. You’re not limited to one-after-the-other chained animations; you can nest timelines, make animations overlap, etc.:

var tl = new TimelineMax();

/* GSAP tweens chain by default, but you can specify exact insertion
   points in the timeline, including relative offsets. */
tl.to(element, 1, { opacity: 0.5 })
  .to(element, 1, { opacity: 1 });

JavaScript Awesomeness: Workflow

Animation is inherently an experimental process in which you need to play with timing and easings to get exactly the feel that your app needs. Of course, even once you think a design is perfect, a client will often request non-trivial changes. In these situations, a manageable workflow becomes critical.

While CSS transitions are impressively easy to sprinkle into a project for effects such as hovers, they become unmanageable when you attempt to sequence even moderately complex animations. That’s why CSS provides keyframe animations, which allow you to group animation logic into sections.

However, a core deficiency of the keyframes API is that you must define sections in percentages, which is unintuitive. For example:

@keyframes myAnimation {
  0%   { opacity: 0; transform: scale(0, 0); }
  25%  { opacity: 1; transform: scale(1, 1); }
  50%  { transform: translate(100px, 0); }
  100% { transform: translate(100px, 100px); }
}

#box {
  animation: myAnimation 2.75s;
}

What happens if the client asks you to make the translateX animation 1 second longer? Yikes. That requires redoing the math and changing all (or most) of the percentages.
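To see how much arithmetic such a change forces on you, here is a plain JavaScript sketch (not part of either library; keyframePercentages is a hypothetical helper) that converts absolute segment durations into the percentage breakpoints a @keyframes rule needs. Lengthening one segment shifts every other breakpoint:

```javascript
// Sketch: convert absolute segment durations (in seconds) into the
// percentage breakpoints a CSS @keyframes rule requires.
function keyframePercentages(segmentDurations) {
  var total = segmentDurations.reduce(function(sum, d) { return sum + d; }, 0);
  var points = [0];
  var elapsed = 0;
  segmentDurations.forEach(function(d) {
    elapsed += d;
    points.push(Math.round((elapsed / total) * 10000) / 100); // round to 2 dp
  });
  return points;
}

// A 2.75s animation with breakpoints at 25%, 50% and 100%:
console.log(keyframePercentages([0.6875, 0.6875, 1.375])); // [0, 25, 50, 100]

// Make the final segment 1 second longer, and every breakpoint moves:
console.log(keyframePercentages([0.6875, 0.6875, 2.375])); // [0, 18.33, 36.67, 100]
```

With a JavaScript animation library you would simply change one duration argument; with CSS keyframes you must recompute these percentages by hand.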

Velocity has its UI pack16 to deal with multi-animation complexity, and GSAP offers nestable timelines17. These features allow for entirely new workflow possibilities.

But let’s stop preaching about workflow and actually dive into fun animation examples.

JavaScript Awesomeness: Physics

Many powerful effects are achievable exclusively via JavaScript. Let’s examine a few, starting with physics-based animation.

The utility of physics in motion design hits upon the core principle of what makes for a great UX: interfaces that flow naturally from the user’s input — in other words, interfaces that adhere to how motion works in the real world.

GSAP offers physics plugins that adapt to the constraints of your UI. For example, the ThrowPropsPlugin tracks the dynamic velocity of a user’s finger or mouse, and when the user releases, ThrowPropsPlugin matches that corresponding velocity to naturally glide the element to a stop. The resulting animation is a standard tween that can be time-manipulated (paused, reversed, etc.):

See the Pen Draggable “Toss” Demo by GreenSock (@GreenSock) on CodePen.

Velocity offers an easing type based on spring physics. Typically with easing options, you pass in a named easing type; for example, ease, ease-in-out or easeInOutSine. With spring physics, you pass a two-item array consisting of tension and friction values (in brackets below):

Velocity(element, { left: 500 }, [ 500, 20 ]); // 500 tension, 20 friction

A higher tension (a default of 500) increases the total speed and bounciness. A lower friction (a default of 20) increases ending vibration speed. By tweaking these values, you can separately fine-tune your animations to have different personalities. Try it out:

See the Pen Velocity.js – Easing: Spring Physics (Tester) by Julian Shapiro (@julianshapiro) on CodePen.
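If you want a feel for what those two numbers do under the hood, here is a minimal damped-spring integrator in plain JavaScript. This is an illustrative sketch, not Velocity’s actual solver; springSettleFrames is a hypothetical helper that measures how many 60fps frames the value takes to stop vibrating:

```javascript
// Illustrative damped-spring integrator (not Velocity's actual solver).
// A unit mass is pulled toward the target by a spring (tension) and slowed
// by a damper (friction), stepped at 60 fps with semi-implicit Euler.
function springSettleFrames(tension, friction) {
  var position = 0, velocity = 0, target = 1;
  var dt = 1 / 60;
  for (var frame = 1; frame <= 10000; frame++) {
    var acceleration = -tension * (position - target) - friction * velocity;
    velocity += acceleration * dt; // update velocity first (semi-implicit)
    position += velocity * dt;
    if (Math.abs(position - target) < 0.001 && Math.abs(velocity) < 0.001) {
      return frame; // settled within threshold
    }
  }
  return Infinity; // never settled within 10,000 frames
}

// Lower friction at the same tension leaves more residual vibration,
// so the value takes noticeably longer to settle:
console.log(springSettleFrames(500, 20)); // Velocity's default values
console.log(springSettleFrames(500, 5));  // bouncier, settles later
```

In this model, dropping friction from 20 to 5 makes the value take several times longer to settle, which matches the bouncier feel you can observe in the demo.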

JavaScript Awesomeness: Scrolling

In Velocity, you can enable the user to scroll the browser to the edge of any element by passing in scroll as Velocity’s first argument (instead of a properties map). The scroll command behaves identically to a standard Velocity call; it can take options and can be queued.

Velocity(element, "scroll", { duration: 1000 });

See the Pen Velocity.js – Command: Scroll w/ Container Option by Julian Shapiro (@julianshapiro) on CodePen.

You can also scroll elements within containers, and you can scroll horizontally. See Velocity’s scroll documentation27 for further information.

GSAP has ScrollToPlugin28, which offers similar functionality and can automatically relinquish control when the user interacts with the scroll bar.

JavaScript Awesomeness: Reverse

Both Velocity and GSAP have reverse commands that enable you to animate an element back to the values prior to its last animation.

In Velocity, pass in reverse as Velocity’s first argument:

// Reverse defaults to the last call's options, which you can extend
Velocity(element, "reverse", { duration: 500 });

Click on the “JS” tab to see the code that powers this demo:

See the Pen Velocity.js – Command: Reverse by Julian Shapiro (@julianshapiro) on CodePen.

In GSAP, you can retain a reference to the animation object, then invoke its reverse() method at any time:

var tween = TweenMax.to(element, 1, { opacity: 0.5 });
tween.reverse();

JavaScript Awesomeness: Transform Control

With CSS animation, all transform components — scale, translation, rotation and skew — are contained in a single CSS property and, consequently, cannot be animated independently using different durations, easings and start times.

For rich motion design, however, independent control is imperative. Let’s look at the dynamic transform control that’s achievable only in JavaScript. Click the buttons at any point during the animation:

See the Pen Independent Transforms by GreenSock (@GreenSock) on CodePen.

Both Velocity and GSAP allow you to individually animate transform components:

// Velocity

/* First animation */
Velocity(element, { translateX: 500 }, 1000);

/* Trigger a second (concurrent) animation after 500 ms */
Velocity(element, { rotateZ: 45 }, { delay: 500, duration: 2000, queue: false });

// GSAP

/* First animation */
TweenMax.to(element, 1, { x: 500 });

/* Trigger a second (concurrent) animation after 500 ms */
TweenMax.to(element, 2, { rotation: 45, delay: 0.5 });

Wrapping Up
  • Compared to CSS animation, JavaScript animation has better browser support and typically more features, and it provides a more manageable workflow for animation sequences.
  • Animating in JavaScript doesn’t entail sacrificing speed (or hardware acceleration). Both Velocity and GSAP deliver blistering speed and hardware acceleration under the hood. No more messing around with null-transform hacks.
  • You don’t need to use jQuery to take advantage of dedicated JavaScript animation libraries. However, if you do, you will not lose out on performance.
Final Note

Refer to Velocity35 and GSAP’s documentation36 to master JavaScript animation.

(al, il)

Front page image credits: NASA Goddard Space Flight Center 37.


The post Animating Without jQuery appeared first on Smashing Magazine.

Testing Mobile: Emulators, Simulators And Remote Debugging

Wed, 09/03/2014 - 13:16

In the early days of mobile, debugging was quite a challenge. Sure, you could get ahold of a device and perform a quick visual assessment, but what would you do after discovering a bug?

With a distinct lack of debugging tools, developers turned to a variety of hacks. In general, these hacks were an attempt to recreate a given issue in a desktop browser and then debug with Chrome Developer Tools or a similar desktop toolkit. For instance, a developer might shrink the size of the desktop browser’s window to test a responsive website or alter the user agent to spoof a particular mobile device.

To put it bluntly, these hacks don’t really prove anything. If you’re recreating issues on the desktop, then you can’t be certain that any of your fixes will work on an actual device. This means you’ll be constantly bouncing back and forth between the mobile device and the hacks in your desktop browser.

Fast forward to today, when we have a robust suite of debugging tools that provide meaningful debugging information directly from a physical device. Best of all, you can use the same desktop debugging tools that you know and love, all on an actual mobile device.

In this article, we’ll explore a variety of emulators and simulators that you can use for quick and easy testing. Then, we’ll look at remote debugging tools, which enable you to connect a desktop computer to a mobile device and leverage a rich debugging interface.

Emulators And Simulators

Testing on real physical devices always pays off. But that doesn’t mean you shouldn’t also test on emulators and simulators. These virtual environments not only expand your testing coverage to more devices, but also are a quick and easy way to test small changes on the fly.

iOS Simulator

To test iOS devices, such as the iPhone and iPad, you have a number of options, most notably Apple’s official iOS Simulator. Included as part of Xcode, this simulator enables you to test across different software and hardware combinations, but only from a Mac.

Viewing a website in iOS Simulator (Image: Jon Raasch2) (View large version3)

First, install Xcode. Then, right-click the Xcode icon in your Applications folder and select “Show Package Contents.” Go to “Contents” → “Applications” → “iPhone Simulator.”

Finding iOS Simulator in Xcode (View large version5)

Although iOS Simulator is difficult to find, using it is fortunately easy. Simply open up Safari in the simulator and test your website. You can switch between different iPhone and iPad devices, change the iOS version, rotate the viewport and more.

Note: If you’re not working on a Mac, you’ll have to find another option. You could look to iPadian6, a Windows-based iPad simulator. Beyond that, a handful of other simulators exist, including certain web-based offerings7. But, to be honest, none of these are very promising.

Android Emulator

Android also provides an emulator. Luckily, this one is cross-platform. Unfortunately, setting it up is a bit of a pain.

First, download the bundle8 that includes Android Development Tools (ADT) for Eclipse and the Android software development kit (SDK). Next, follow Google’s instructions9 to install the SDK packages, making sure to install the default selections as well as the “Intel x86 Emulator Accelerator (HAXM installer)”. You’ll also need to track down HAXM — search your Mac for IntelHaxm.dmg or your PC for IntelHaxm.exe, and run the file to install it.

Installing the Android SDK packages: HAXM improves the performance of the emulator. (View large version11)

Next, create an Android virtual device (AVD) for whichever device you’re testing. If you go into the AVD manager, you’ll see a list of preset devices in “Device Definitions.” These cover a variety of Google products and some generic devices. To get started, select one of these presets and click “Create AVD.”

The “Device Definitions” tab provides preset AVDs. Use one of them or create your own. (View large version13)

Set whatever you like for the CPU, and set “No skin” and “Use host GPU.” Now you can run the virtual device and use Android’s browser to test your website.

Viewing a website in the Android emulator (Image: Smashing Magazine15) (View large version16)

Finally, you’ll probably want to learn some keyboard commands17 to better interact with the emulator.

Note: Manymo18 is an alternative, in-browser Android emulator. You can even embed it in a web page, which is pretty darn cool.

Other Simulators and Emulators

Remote Testing

Emulators and simulators are useful, but they’re not 100% accurate. Always test on as many real devices as possible.

That doesn’t mean you need to buy a hundred phones and tablets. You can take advantage of remote testing resources, which provide a web-based interface to interact with real physical devices. You’ll be able to interact with a phone remotely and view any changes in the screencast that is sent back to your machine.

If you want to test a Samsung device, such as the Galaxy S5, you can do so for free using Samsung’s Remote Test Lab21, which enables you to test on a wide selection of Samsung devices.

Additionally, you can use the resources in Keynote Mobile Testing22. They’re not cheap, but the number of devices offered is pretty astonishing, and you can test a handful of devices for free.

Note: If you’re looking to get your hands on real devices, Open Device Lab23 can point you to a lab in your area, where you can test on a range of devices for free.

Remote Debugging

Remote debugging addresses a variety of the challenges presented by mobile debugging. Namely, how do you get meaningful debugging information from a small and relatively underpowered device?

Remote debugging tools provide an interface to connect to a mobile device from a desktop computer. Doing this, you can debug for a mobile device using the development tools on a more powerful, easier-to-use desktop machine.


With the release of iOS 6.0, Apple introduced a tool that enables you to use desktop Safari’s Web Inspector to debug iOS devices.

To get started, enable remote debugging on your iOS device by going to “Settings” → “Safari” → “Advanced” and enabling “Web Inspector.”

First, enable Web Inspector in “Settings” → “Safari” → “Advanced.” (View large version25)

Next, physically connect your phone or tablet to your machine using a USB cable. Then, open Safari (version 6.0 or higher), and in “Preferences” → “Advanced,” select “Show Develop menu in menu bar.”

Now, in the “Develop” menu you should see your iOS device, along with any open pages in mobile Safari.

Once your iOS device is connected, you’ll see it in the “Develop” menu. (View large version27)

Select one of these pages, and you’ll have a wide range of developer tools at your fingertips. For example, try out the DOM Inspector, which enables you to tap DOM elements on your mobile device and see debugging information on the desktop.

Web Inspector in desktop Safari is inspecting this iPhone. (View large version29)

The DOM Inspector is really just the beginning. iOS’ remote developer tools provide a ton of features, such as:

  • timelines to track network requests, layout and rendering tasks and JavaScript;
  • a debugger to set breakpoints and to profile the JavaScript;
  • a JavaScript console.

To learn more about what you can do, read through the documents in the “Safari Web Inspector Guide30.”

You don’t need a physical iOS device to use remote debugging. You can also debug instances of iOS Simulator. (View large version32)

Note: Much like iOS Simulator, you can only do remote debugging for iOS on Mac OS X.


Similar to iOS, Android has a remote debugging solution. The tools in it enable you to debug an Android device from a desktop machine using Chrome’s Developer Tools. Best of all, Android’s remote debugging is cross-platform.

First, go to “Settings” → “About Phone” on your Android 4.4+ phone (or “Settings” → “About Tablet”). Next, tap the “Build Number” seven (7) times. (No, I’m not joking. You’ll see a message about being a developer at the end.) Now, go back to the main settings and into “Developer Options.” Here, enable “USB debugging,” and you’re all set.

Left: Tap the “Build Number” seven times to enable developer mode. Right: Enable “USB debugging.”(View large version34)

Go into your desktop Chrome browser, and type about:inspect in the address bar. Enable “Discover USB devices,” and you’ll see your device in the menu.

Once you enable “Discover USB devices,” you’ll see a list of devices connected remotely to Chrome, along with a list of debuggable web pages or apps for each device. (View large version36)

You should also see any open tabs in your mobile browser. Select whichever tab you want to debug, and you’ll be able to leverage a ton of useful tools, such as:

  • a DOM Inspector,
  • a network panel for external resources,
  • a sources panel to watch JavaScript and to set breakpoints,
  • a JavaScript console.

To learn more about what’s possible, read HTML5 Rocks’ tutorial “Introduction to Chrome Developer Tools, Part One.”

Here, the DOM Inspector in the desktop browser is remotely inspecting a page on the Android device. (Image: Google39) (View large version40)

Note: You can also remotely debug with the Android emulator.


You now know how to remotely debug a variety of devices. But if you want to debug iOS on Windows or on Linux or debug other devices, such as Windows Phone or BlackBerry, then try Weinre, which works on any device.

Setting up Weinre is a bit more complicated because you have to install it on both the server and the page. To get started, install Node, and then install the Weinre module with the following command:

npm install -g weinre

Next, run the debugging server using your development machine’s IP:

weinre --boundHost <your-ip-address>

Note: Make sure to insert your own IP in the command above. You can find your IP on a Mac using the command ipconfig getifaddr en0 and on Windows using ipconfig.

Next, go to the development server that is outputted by Weinre in the console (in my case, it’s localhost:8080). Here, look at the “Target Script” section, and grab the <script> tag. You’ll need to include that on whichever pages you want to debug.

The Weinre development server gives you the client-side script to embed, along with a link to the debugging interface. (View large version42)

Finally, click on the link at the top of this page for the user interface for debugging clients (in my case, it’s http://localhost:8080/client/#anonymous). Now, once you open the page in your device, you should see it in the list of targets.

Note: If you’re having trouble connecting a device to your localhost, consider setting up a public tunnel with ngrok43.

Weinre’s debugging interface provides a link to each debuggable target. (View large version)

At this point, you can leverage a lot of WebKit Developer Tools to debug the page. You can use handy tools such as the DOM Inspector:

Here, Weinre is debugging iOS with the DOM Inspector. (View large version)

Once you get past the initial installation, Weinre lets you debug any device on any network. However, it’s not as powerful as the native solutions for iOS and Android. For example, you can’t use the “Sources” panel to debug JavaScript or take advantage of the profiler.

Note: Ghostlab48 is another remote testing option that supports multiple platforms.


In this article, we’ve learned how to set up a robust testing suite using a combination of physical devices, emulators, simulators and remote testing tools. With these tools, you are now able to test a mobile website or app across a wide variety of devices and platforms.

We’ve also explored remote debugging tools, which provide useful information directly from a mobile device. Hopefully, you now realize the benefits of remote debugging for mobile. Without it, we’re really just taking stabs in the dark.

Further Reading

(da, al, ml, il)


The post Testing Mobile: Emulators, Simulators And Remote Debugging appeared first on Smashing Magazine.

Building A Simple Cross-Browser Offline To-Do List With IndexedDB And WebSQL

Tue, 09/02/2014 - 08:25

Making an application work offline can be a daunting task. In this article, Matthew Andrews, a lead developer behind FT Labs, shares a few insights he had learned along the way while building the FT application. Matthew will also be running a “Making It Work Offline” workshop1 at our upcoming Smashing Conference in Freiburg in mid-September 2014. — Ed.

We’re going to make a simple offline-first to-do application2 with HTML5 technology. Here is what the app will do:

  • store data offline and load without an Internet connection;
  • allow the user to add and delete items in the to-do list;
  • store all data locally, with no back end;
  • run on the first- and second-most recent versions of all major desktop and mobile browsers.

The complete project is ready for forking on GitHub3.

Which Technologies To Use

In an ideal world, we’d use just one client database technology. Unfortunately, we’ll have to use two:

  • IndexedDB, which is the official W3C standard and is supported by most modern browsers;
  • WebSQL, which is deprecated but is still the only option in browsers that lack IndexedDB support.

Veterans of the offline-first world might now be thinking, “But we could just use localStorage6, which has the benefits of a much simpler API, and we wouldn’t need to worry about the complexity of using both IndexedDB and WebSQL.” While that is technically true, localStorage has a number of problems7, the most important of which is that the amount of storage space available with it is significantly less than with IndexedDB and WebSQL.

Luckily, while we’ll need to use both, we’ll only need to think about IndexedDB. To support WebSQL, we’ll use an IndexedDB polyfill8. This will keep our code clean and easy to maintain, and once all browsers that we care about support IndexedDB natively, we can simply delete the polyfill.

Note: If you’re starting a new project and are deciding whether to use IndexedDB or WebSQL, I strongly advocate using IndexedDB and the polyfill. In my opinion, there is no reason to write any new code that integrates with WebSQL directly.

I’ll go through all of the steps using Google Chrome (and its developer tools), but there’s no reason why you couldn’t develop this application using any other modern browser.

1. Scaffolding The Application And Opening A Database

We will create the following files in a single directory:

  • /index.html
  • /application.js
  • /indexeddb.shim.min.js
  • /styles.css
  • /offline.appcache
/index.html

<!DOCTYPE html>
<html>
<head>
  <link rel='stylesheet' href='./styles.css' type='text/css' media='all' />
</head>
<body>
  <h1>Example: Todo</h1>
  <form>
    <input placeholder="Type something" />
  </form>
  <ul>
  </ul>
  <script src="./indexeddb.shim.min.js"></script>
  <script src="./application.js"></script>
</body>
</html>

Nothing surprising here: just a standard HTML web page, with an input field to add to-do items, and an empty unordered list that will be filled with those items.

/indexeddb.shim.min.js
Download the contents of the minified IndexedDB polyfill9, and put it in this file.

/styles.css

body {
  margin: 0;
  padding: 0;
  font-family: helvetica, sans-serif;
}

* {
  box-sizing: border-box;
}

h1 {
  padding: 18px 20px;
  margin: 0;
  font-size: 44px;
  border-bottom: solid 1px #DDD;
  line-height: 1em;
}

form {
  padding: 20px;
  border-bottom: solid 1px #DDD;
}

input {
  width: 100%;
  padding: 6px;
  font-size: 1.4em;
}

ul {
  margin: 0;
  padding: 0;
  list-style: none;
}

li {
  padding: 20px;
  border-bottom: solid 1px #DDD;
  cursor: pointer;
}

Again, this should be quite familiar: just some simple styles to make the to-do list look tidy. You may choose not to have any styles at all or create your own.

/application.js

(function() {
  // 'global' variable to store reference to the database
  var db;

  databaseOpen(function() {
    alert("The database has been opened");
  });

  function databaseOpen(callback) {
    // Open a database, specify the name and version
    var version = 1;
    var request ='todos', version);
    request.onsuccess = function(e) {
      db =;
      callback();
    };
    request.onerror = databaseError;
  }

  function databaseError(e) {
    console.error('An IndexedDB error has occurred', e);
  }
}());

All this code does is create a database with and then show the user an old-fashioned alert if it is successful. Every IndexedDB database needs a name (in this case, todos) and a version number (which I’ve set to 1).

To check that it’s working, open the application in the browser, open up “Developer Tools” and click on the “Resources” tab.

In the “Resources” panel, you can check whether it’s working.

By clicking on the triangle next to “IndexedDB,” you should see that a database named todos has been created.

2. Creating The Object Store

Like many database formats that you might be familiar with, you can create many tables in a single IndexedDB database. These tables are called “objectStores.” In this step, we’ll create an object store named todo. To do this, we simply add an event listener on the database’s upgradeneeded event.

The data format that we will store to-do items in will be JavaScript objects, with two properties:

  • timeStamp
    This timestamp will also act as our key.
  • text
    This is the text that the user has entered.

For example:

{ timeStamp: 1407594483201, text: 'Wash the dishes' }
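As a rough illustration of why the timestamp can double as the key, here is a stand-in sketch using a plain Map rather than the real IndexedDB API; put is a hypothetical helper mimicking how an object store with keyPath: 'timeStamp' extracts the key from the stored object itself:

```javascript
// Sketch: a Map standing in for the 'todo' object store. With a keyPath,
// the store derives each record's key from the stored object rather than
// taking a separate key argument.
var store = new Map();

function put(store, item) {
  store.set(item.timeStamp, item); // key comes from the object's keyPath
}

put(store, { timeStamp: 1407594483201, text: 'Wash the dishes' });
put(store, { timeStamp: 1407594483202, text: 'Feed the cat' });

console.log(store.get(1407594483201).text); // 'Wash the dishes'
```

Because each to-do is created at a distinct moment, the millisecond timestamp is a cheap way to get a unique, naturally ordered key.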

Now, /application.js looks like this (the new code starts at request.onupgradeneeded):

(function() {
  // 'global' variable to store reference to the database
  var db;

  databaseOpen(function() {
    alert("The database has been opened");
  });

  function databaseOpen(callback) {
    // Open a database, specify the name and version
    var version = 1;
    var request ='todos', version);
    // Run migrations if necessary
    request.onupgradeneeded = function(e) {
      db =; = databaseError;
      db.createObjectStore('todo', { keyPath: 'timeStamp' });
    };
    request.onsuccess = function(e) {
      db =;
      callback();
    };
    request.onerror = databaseError;
  }

  function databaseError(e) {
    console.error('An IndexedDB error has occurred', e);
  }
}());

This will create an object store keyed by timeStamp and named todo.

Or will it?

Having updated application.js, if you open the web app again, not a lot happens. The code in onupgradeneeded never runs; try adding a console.log in the onupgradeneeded callback to be sure. The problem is that we haven’t incremented the version number, so the browser doesn’t know that it needs to run the upgrade callback.

How to Solve This?

Whenever you add or remove object stores, you will need to increment the version number. Otherwise, the structure of the data will be different from what your code expects, and you risk breaking the application.
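The browser’s decision boils down to a simple comparison: `upgradeneeded` fires only when the version passed to `open()` is greater than the version stored on disk. A toy model of that check (not the real implementation) makes our bug obvious:

```javascript
// Simplified model of when IndexedDB fires `upgradeneeded`:
// only when the requested version exceeds the stored one.
function upgradeNeeded(storedVersion, requestedVersion) {
  return requestedVersion > storedVersion;
}

upgradeNeeded(1, 1); // false — our bug: the version was never bumped
upgradeNeeded(1, 2); // true  — bumping to 2 would run the migration
```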

Because this application doesn’t have any real users yet, we can fix this another way: by deleting the database. Copy this line of code into the “Console,” and then refresh the page:

indexedDB.deleteDatabase('todos');
After refreshing, the “Resources” pane of “Developer Tools” should have changed and should now show the object store that we added:

The “Resources” panel should now show the object store that was added.

3. Adding Items

The next step is to enable the user to add items.


Note that I’ve omitted the database’s opening code, indicated by ellipses (…) below:

(function() {
  // Some global variables (database, references to key UI elements)
  var db, input;

  databaseOpen(function() {
    input = document.querySelector('input');
    document.body.addEventListener('submit', onSubmit);
  });

  function onSubmit(e) {
    e.preventDefault();
    databaseTodosAdd(input.value, function() {
      input.value = '';
    });
  }

  […]

  function databaseTodosAdd(text, callback) {
    var transaction = db.transaction(['todo'], 'readwrite');
    var store = transaction.objectStore('todo');
    var request = store.put({
      text: text
    });
    transaction.oncomplete = function(e) {
      callback();
    };
    request.onerror = databaseError;
  }
}());

We’ve added two bits of code here:

  • The event listener responds to every submit event, prevents that event’s default action (which would otherwise refresh the page), calls databaseTodosAdd with the value of the input element, and (if the item is successfully added) sets the value of the input element to be empty.
  • A function named databaseTodosAdd stores the to-do item in the local database, along with a timestamp, and then runs a callback.
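The control flow of the submit handler is easiest to see with the database stubbed out. In this sketch, `fakeAdd` stands in for `databaseTodosAdd` (store the text, then invoke the callback), and a plain object stands in for the input element; both names are invented for illustration:

```javascript
var stored = [];
var input = { value: 'Wash the dishes' };

// Stand-in for databaseTodosAdd: store the text, then call back.
function fakeAdd(text, callback) {
  stored.push(text);
  callback();
}

// Same shape as the article's onSubmit handler, minus the event object.
function submit() {
  fakeAdd(input.value, function() {
    input.value = ''; // clear the field only after a successful add
  });
}

submit();
// stored → ['Wash the dishes'], input.value → ''
```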

To test that this works, open up the web app again. Type some words into the input element and press “Enter.” Repeat this a few times, and then open up “Developer Tools” to the “Resources” tab again. You should see the items that you typed now appear in the todo object store.

After adding a few items, they should appear in the todo object store. (View large version11)

4. Retrieving Items

Now that we’ve stored some data, the next step is to work out how to retrieve it.


Again, the ellipses indicate code that we have already implemented in steps 1, 2 and 3.

(function() {
  // Some global variables (database, references to key UI elements)
  var db, input;

  databaseOpen(function() {
    input = document.querySelector('input');
    document.body.addEventListener('submit', onSubmit);
    databaseTodosGet(function(todos) {
      console.log(todos);
    });
  });

  […]

  function databaseTodosGet(callback) {
    var transaction = db.transaction(['todo'], 'readonly');
    var store = transaction.objectStore('todo');

    // Get everything in the store
    var keyRange = IDBKeyRange.lowerBound(0);
    var cursorRequest = store.openCursor(keyRange);

    // This fires once per row in the store. So, for simplicity,
    // collect the data in an array (data), and pass it in the
    // callback in one go.
    var data = [];
    cursorRequest.onsuccess = function(e) {
      var result =;

      // If there's data, add it to the array
      if (result) {
        data.push(result.value);
        result.continue();

      // Reached the end of the data
      } else {
        callback(data);
      }
    };
  }
}());

After the database has been initialized, this will retrieve all of the to-do items and output them to the “Developer Tools” console.

Notice how the onsuccess callback is called after each item is retrieved from the object store. To keep things simple, we put each result into an array named data, and when we run out of results (which happens when we’ve retrieved all of the items), we call the callback with that array. This approach is simple, but other approaches might be more efficient.
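The accumulate-then-callback pattern can be sketched in isolation. Here a plain array stands in for the object store, and `next()` plays the role of the cursor’s `onsuccess` firing once per row; the names are illustrative, not part of the IndexedDB API:

```javascript
// Simulate cursor-style iteration: one callback per row,
// collecting rows into an array, then one final callback.
function collectAll(rows, callback) {
  var data = [];
  var i = 0;
  function next() {
    if (i < rows.length) {
      data.push(rows[i]);
      i += 1;
      next(); // the real cursor calls result.continue() here
    } else {
      callback(data); // end of data: hand everything back in one go
    }
  }
  next();
}

collectAll([{ text: 'a' }, { text: 'b' }], function(todos) {
  console.log(todos.length); // 2
});
```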

If you reopen the application again, the “Developer Tools” console should look a bit like this:

The console after reopening the application.

5. Displaying Items

The next step after retrieving the items is to display them.

/application.js

(function() {
  // Some global variables (database, references to key UI elements)
  var db, input, ul;

  databaseOpen(function() {
    input = document.querySelector('input');
    ul = document.querySelector('ul');
    document.body.addEventListener('submit', onSubmit);
    databaseTodosGet(renderAllTodos);
  });

  function renderAllTodos(todos) {
    var html = '';
    todos.forEach(function(todo) {
      html += todoToHtml(todo);
    });
    ul.innerHTML = html;
  }

  function todoToHtml(todo) {
    return '<li>' + todo.text + '</li>';
  }

  […]

    All we’ve added are a couple of very simple functions that render the to-do items:

    • todoToHtml
      This takes a single to-do object (i.e. the simple JavaScript object that we defined earlier) and converts it to an HTML string: one list item.
    • renderAllTodos
      This takes an array of to-do objects, converts them to an HTML string and sets the unordered list’s innerHTML to it.
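Stripped of the DOM, the two rendering helpers are plain string building. A self-contained version (mirroring the article’s functions, minus the `ul.innerHTML` assignment, and with `todosToHtml` returning the string instead) looks like this:

```javascript
// Convert one to-do object to an <li> string.
function todoToHtml(todo) {
  return '<li>' + todo.text + '</li>';
}

// Convert an array of to-dos to the list's inner HTML.
function todosToHtml(todos) {
  var html = '';
  todos.forEach(function(todo) {
    html += todoToHtml(todo);
  });
  return html;
}

todosToHtml([{ text: 'Wash the dishes' }, { text: 'Buy milk' }]);
// '<li>Wash the dishes</li><li>Buy milk</li>'
```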

    Finally, we’re at a point where we can actually see what our application is doing without having to look in “Developer Tools”! Open up the app again, and you should see something like this:

    Your application in the front-end view (View large version13)

    But we’re not done yet. Because the application only displays items when it launches, if we add any new ones, they won’t appear unless we refresh the page.

    6. Displaying New Items

    We can fix this with a single line of code.


    The new code is just the line databaseTodosGet(renderAllTodos);.

    […]

    function onSubmit(e) {
      e.preventDefault();
      databaseTodosAdd(input.value, function() {
        // After new items have been added, re-render all items
        databaseTodosGet(renderAllTodos);
        input.value = '';
      });
    }

    […]

    Although this is very simple, it’s not very efficient. Every time we add an item, the code will retrieve all items from the database again and render them on screen.
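One sketch of a cheaper approach: instead of re-reading the whole store, append just the new item’s markup to what is already on screen. Here `html` stands in for `ul.innerHTML`, and `appendTodo` is an invented helper, not part of the article’s code:

```javascript
function todoToHtml(todo) {
  return '<li>' + todo.text + '</li>';
}

// Append only the newly added item, avoiding a full re-render.
function appendTodo(html, todo) {
  return html + todoToHtml(todo);
}

var html = '<li>Wash the dishes</li>';
html = appendTodo(html, { text: 'Buy milk' });
// '<li>Wash the dishes</li><li>Buy milk</li>'
```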

    7. Deleting Items

    To keep things as simple as possible, we will let users delete items by clicking on them. (For a real application, we would probably want a dedicated “Delete” button or show a dialog so that an item doesn’t get deleted accidentally, but this will be fine for our little prototype.)

    To achieve this, we will be a little hacky and give each item an ID set to its timeStamp. This will enable the click event listener, which we will add to the document’s body, to detect when the user clicks on an item (as opposed to anywhere else on the page).

    /application.js

    (function() {
      // Some global variables (database, references to key UI elements)
      var db, input, ul;

      databaseOpen(function() {
        input = document.querySelector('input');
        ul = document.querySelector('ul');
        document.body.addEventListener('submit', onSubmit);
        document.body.addEventListener('click', onClick);
        databaseTodosGet(renderAllTodos);
      });

      function onClick(e) {
        // We'll assume that any element with an ID
        // attribute is a to-do item. Don't try this at home!
        if ('id')) {
          // Because the ID is stored in the DOM, it becomes
          // a string. So, we need to make it an integer again.
          databaseTodosDelete(parseInt('id'), 10), function() {
            // Refresh the to-do list
            databaseTodosGet(renderAllTodos);
          });
        }
      }

      […]

      function todoToHtml(todo) {
        return '<li id="' + todo.timeStamp + '">' + todo.text + '</li>';
      }

      […]

      function databaseTodosDelete(id, callback) {
        var transaction = db.transaction(['todo'], 'readwrite');
        var store = transaction.objectStore('todo');
        var request = store.delete(id);
        transaction.oncomplete = function(e) {
          callback();
        };
        request.onerror = databaseError;
      }
    }());

    We’ve made the following enhancements:

    • We’ve added a new event handler (onClick) that listens to click events and checks whether the target element has an ID attribute. If it has one, then it converts that back into an integer with parseInt, calls databaseTodosDelete with that value and, if the item is successfully deleted, re-renders the to-do list following the same approach that we took in step 6.
    • We’ve enhanced the todoToHtml function so that every to-do item is outputted with an ID attribute, set to its timeStamp.
    • We’ve added a new function, databaseTodosDelete, which takes that timeStamp and a callback, deletes the item and then runs the callback.
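The reason for the `parseInt` call is worth spelling out: the numeric `timeStamp` becomes a string once it travels through the DOM as an `id` attribute, and `store.delete()` needs the original number back to match the key. The round trip, in isolation:

```javascript
var timeStamp = 1407594483201;        // numeric key in the object store
var domId = String(timeStamp);        // what getAttribute('id') would return
var recovered = parseInt(domId, 10);  // convert back before store.delete()

recovered === timeStamp; // true — the key survives the round trip
```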

    Our to-do app is basically feature-complete. We can add and delete items, and it works in any browser that supports WebSQL or IndexedDB (although it could be a lot more efficient).

    Almost There

    Have we actually built an offline-first to-do app? Almost, but not quite. While we can now store all data offline, if you switch off your device’s Internet connection and try loading the application, it won’t open. To fix this, we need to use the HTML5 Application Cache14.

    • While HTML5 Application Cache works reasonably well for a simple single-page application like this, it doesn’t always. Thoroughly research how it works15 before considering whether to apply it to your website.
    • Service Worker16 might soon replace HTML5 Application Cache, although it is not currently usable in any browser, and neither Apple nor Microsoft have publicly committed to supporting it.
    8. Truly Offline

    To enable the application cache, we’ll add a manifest attribute to the html element of the web page.

    /index.html

    <!DOCTYPE html>
    <html manifest="./offline.appcache">
    […]

    Then, we’ll create a manifest file, which is a simple text file in which we crudely specify the files to make available offline and how we want the cache to behave.

    /offline.appcache

    CACHE MANIFEST
    ./styles.css
    ./indexeddb.shim.min.js
    ./application.js

    NETWORK:
    *

    The section that begins CACHE MANIFEST tells the browser the following:

    • When the application is first accessed, download each of those files and store them in the application cache.
    • Any time any of those files are needed from then on, load the cached versions of the files, rather than redownload them from the Internet.

    The section that begins NETWORK tells the browser that all other files must be downloaded fresh from the Internet every time they are needed.
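Put together, the browser’s routing decision for this manifest can be modelled as a simple lookup (a simplified illustration with invented names, not a real browser API):

```javascript
// Files listed under CACHE MANIFEST are served from the application cache;
// everything else (the NETWORK: * wildcard) goes to the network.
var cachedFiles = ['./styles.css', './indexeddb.shim.min.js', './application.js'];

function sourceFor(path) {
  return cachedFiles.indexOf(path) !== -1 ? 'cache' : 'network';
}

sourceFor('./application.js'); // 'cache'
sourceFor('./api/todos');      // 'network'
```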


    We’ve created a quick and simple to-do app17 that works offline and that runs in all major modern browsers, thanks to both IndexedDB and WebSQL (via a polyfill).


    (al, ml, il)

    The post Building A Simple Cross-Browser Offline To-Do List With IndexedDB And WebSQL appeared first on Smashing Magazine.

    Think Your App Is Beautiful? Not Without User Experience Design

    Mon, 09/01/2014 - 12:57

    Lately, every app is “beautiful”. If you read tech news, you’ve seen this pageant: Beautiful charts and graphs1. Beautiful stories2. Beautiful texting3. Beautiful notebooks4. Beautiful battery information5.

    Aspiring to beauty in our designs is admirable. But it doesn’t guarantee usability, nor is it a product or marketing strategy. Like “simple” and “easy” before it, “beautiful” says very little about the product. How many people, fed up with PowerPoint, cry out in frustration, “If only it were more beautiful”?

    At best, the problem is simple: No one has figured out how to describe their product effectively. For example, Write6, a note-taking app, describes itself as “a beautiful home for all your notes,” which doesn’t say much about why one might want it. Macworld describes it as “Easy Markdown Writing for Dropbox Users7.” That’s both concise and specific: If you like Markdown and use Dropbox, you’ll read more.

    The word “beautiful” says very little about a product.

    Changing adjectives reflect changing attitudes. In his recent article “The Seduction of the Superficial in Digital Product Design8,” Adaptive Path’s Peter Merholz writes:

    “Digital product design discourse over the last few years has become literally superficial. Much (most?) of the attention has been on issues like ‘flat’ vs ‘skeuomorphic’, the color scheme of iOS 7, parallax scrolling, or underlining links. And not that these things aren’t important or worth discussing, but as someone who came up in design by way of usability and information architecture, I’ve been disappointed how the community has willfully neglected the deeper concerns of systems and structure in favor of the surface. I mean, how many pixels need to be spilled on iOS 7.1’s redesigned shift key?”

    It wasn’t always this way. Indeed, when I became a designer, we had the opposite problem.

    Design Before Aesthetics

    When I started designing in the mid-’90s, we called it “user interface design,” or “human-computer interaction.” Software wasn’t a particularly hospitable place for graphic designers. The web was a limited medium, with 216 “web-safe” colors9 and a need to support 800 × 600 displays at 72 PPI. Desktop platforms were even more limited: A button looked like a button looked like a button, and few apps undertook the monumental effort of creating their own look and feel. Animation was limited to image-swap rollovers. The notion of a “front-end web developer” didn’t exist, because the platform was restrictive enough that designers with no coding experience could do it themselves.

    Practitioners of human-computer interaction tended to disregard visual design. We dealt in usability, information architecture and data visualization. Making it pretty was, if anything, a great way to obscure substance with Flash.

    And that bothered me. Aesthetics didn’t seem incompatible with usability; and given the choice between a usable ugly product and a usable attractive one, why not the latter? I also couldn’t help thinking about form and function together, each supporting and enhancing the other.

    I found support in the work of Edward Tufte10, whose books laid out principles and processes for information design. Many of these principles — like the “smallest effective difference” and the distinction between data ink and chartjunk11 — continue to guide my work today.

    By applying Tufte’s principles on the smallest effective difference and on data ink to the graph on the left, we achieve the more effective display of information on the right. It’s also far more beautiful, almost by accident. (View large version13)

    Tufte rarely talks of beauty, yet his work is infused with it. Over and over, he takes a poor design and turns it into an effective one — and in the process transforms it from garish to gorgeous. For Tufte, beauty is an integral part of presenting information well. In closing The Visual Display of Quantitative Information14, he writes, “Graphical elegance is often found in simplicity of design and complexity of data.”

    Tufte’s most famous example, Charles Minard’s 1869 graphic “Napoleon’s March to Moscow,” shows six dimensions of information with remarkable clarity. (Source: Wikipedia16) (View large version)

    Putting The Visual In Design

    By the mid-2000s, the industry had caught up with Tufte. Digital designers tended to fall into one of two buckets: “user experience design” (UX) and “visual design” (although titles varied). Visual design was, in a sense, applied graphic design, and it brought UX to life in pixels. It wasn’t uncommon to put two designers on a project, one of each type. The resulting work benefited not only from their breadth of skills but also from the interplay between the two.

    Why this newfound acceptance of the visual? In his 2007 Emotional Design1917, Don Norman writes:

    “In the 1980s, in writing “The Design of Everyday Things”, I didn’t take emotions into account. I addressed utility and usability, function and form, all in a logical, dispassionate way — even though I am infuriated by poorly designed objects. But now I’ve changed. Why? In part because of new scientific advances in our understanding of the brain and of how emotion and cognition are thoroughly intertwined.”

    Perception and cognition are subjective, and our emotional state influences how we think. That malleability is pervasive: The world we see is not the world as it is. (For a fascinating and detailed exploration of this, I recommend David Eagleman’s Incognito: The Secret Lives of the Brain18.) The consequences are profound: If, in crafting an app’s look and feel, one can put the user in an emotional state conducive to the task at hand, then the visual design will have been a foundational part of the UX.

    This shift in perspective paralleled technological improvement. We had millions of colors, larger displays and the horsepower to build smooth animations. CSS and the DOM gave us the flexibility to build fully styled, animated, interactive apps in the browser. Displays were still low-resolution and typography rudimentary, but it was possible to build a product that represented a distinct, polished graphical vision from top to bottom — and to apply the principles of information design with far more subtlety.

    And it kept getting better. The iPhone put dynamic, context-aware software in everyone’s pocket. Display resolution doubled, and then kept increasing. Animation got smoother, richer and much easier to build — complete with bouncing, colliding objects that mimicked real-world physics. Web fonts freed us from the shackles of Arial and Verdana, and those high-resolution displays gave legibility to font families and weights that weren’t previously practical. Today, we have unprecedented power to deliver cinematic delight.

    Taking The UX Out Of Design

    By 2011, I noticed attitudes changing again. We’d taken the “design” out of “user experience design,” and some companies had separate UX and “design” teams. Wireframes had fallen out of favor in some circles. Design debates were increasingly divorced from user needs, as Peter Merholz noted.

    Last year saw the triumph of “flat” over “skeuomorphic” design, rooted in the odd Modernist-esque argument that drop shadows, textures and gloss are somehow untrue to the medium of a digital display.

    But the digital display is not, in fact, our medium. The display, the keyboard, the mouse and the touchscreen are themselves artifacts that we’ve designed in order to work with our true medium: the user. Our medium is a strange, quirky one, evolved over millennia to interact with an environment that rarely matches the one we find them in, stuck with a heuristic way of thinking that sometimes produces incredible flashes of insight and sometimes steers them in entirely the wrong direction.

    So today, perhaps Don Norman’s insight in Emotional Design1917 is needed in reverse. Beauty is wasted when our products don’t address real user needs in a usable manner. Again, perception is subjective: The product gets uglier if it fails to meet user needs or becomes confusing. It’s like falling in love at first sight, then falling back out after a brief conversation. Your crush looks less attractive now; you can’t even recall why you were so captivated in the first place.

    Great design is a synthesis of art and science, of aesthetics, usability and a deep understanding of user needs and behaviors. To succeed, we need a balance. Beauty that isn’t part of holistic, effective product design will be wasted.

    As digital designers, our tool is the pixel, but we have higher-level tools as well. We can induce an emotional state. We can use pictures and shapes to evoke familiar objects, endowing our grid of pixels with illusions of the physical world, prompting familiar interactions in an unfamiliar environment.

    Furthermore, we can delight with animation, even as we use it to preserve context and mimic physicality. We can deliberately choose typography, color and thickness to create visual hierarchy — emphasize some elements and de-emphasize others to help the user’s eye flow across the page and understand what needs to happen next. We can condense a galaxy of data points into a chart that allows instant understanding. And we can infuse all of it with a beauty built not as a complement to our efforts, but as an integral part of it.

    (cc, al, ml, il)

    The post Think Your App Is Beautiful? Not Without User Experience Design appeared first on Smashing Magazine.

    Desktop Wallpaper Calendars: September 2014

    Sun, 08/31/2014 - 12:00

    We always try our best to challenge your artistic abilities and produce some interesting, beautiful and creative artwork. And as designers we usually turn to different sources of inspiration. As a matter of fact, we’ve discovered the best one—desktop wallpapers that are a little more distinctive than the usual crowd. This creativity mission has been going on for six years now1, and we are very thankful to all designers who have contributed and are still diligently contributing each month.

    This post features free desktop wallpapers created by artists across the globe for September 2014. Both versions with a calendar and without a calendar can be downloaded for free. It’s time to freshen up your wallpaper!

    Please note that:

    • All images can be clicked on and lead to the preview of the wallpaper,
    • You can feature your work in our magazine2 by taking part in our Desktop Wallpaper Calendars series. We are regularly looking for creative designers and artists to be featured on Smashing Magazine. Are you one of them?

    Designed by Elise Vanoorbeek3 from Belgium.



    “I thought about how sometimes we have a predisposition towards something instead of trying to understand it.” — Designed by Maria Keller46 from Mexico.


    Shades Of Summer

    “You can never have too many sunglasses” — Designed by Marina Eyl88 from Pennsylvania, USA.


    Summer Is Still Here!

    “Even if most of people is coming back to school or work after summer holidays, it’s still summer, let’s enojy it!” — Designed by Valentina101 from Italy.


    Pawns And Kings

    Designed by Clarise Frechette Design, LLC114 from Washington, DC, USA.


    Dream It

    “September for many is the month when school begins so it’s important to be positive and have an optimistic spirit about the goals one has for the upcoming year.” — Designed by Teodor Dovichinski157 from Macedonia.


    Be The Wind Of Change

    “Be the wind of change. Nature inspired us in creating this wallpaper as well as the Scorpion’s song “Wind of change” we dedicate to all creatives worldwide :)” — Designed by Design19200 from Romania.


    Elephant In The Room

    “I was inspired by Elephant Appreciation Day, which everyone knows is September 22nd.” — Designed by Rosemary Ivosevich255 from Philadelphia, PA.


    California Sundown

    “September means a lot of things. If you are into marketing, it is a big month. Not only for back-to-school sales but also for planning ahead for Christmas. Of course this brings both developers, designers and copywriters into the grind. With all this work piling up on top of us we try to keep our cool and relax whenever we can. After all, taking appropriate breaks is very important for productivity. This calming photo was snapped during my trip from San Francisco to Los Angeles back in 2013. It was a hellish week, full of meetings and travel, sometimes 28 hours at a time. But being able to stop, take a breath of fresh air and appreciate the beauty that is around us kept me and my co-founders healthy, happy and sane. I hope it will do the same for you.” — Designed by Dmitri Tcherbadji296 from Canada.


    Green Grass Of The Vosges

    “The Vosges are mountains in the east of France. In september it’s still possible to enjoy the green grass of the grazing land, more often at the top of the moutains. Summer is not closed, yet!” — Designed by Philippe Brouard343 from France.


    Autumn Ripples

    “No matter where in the world you are, this calendar will bring some autumn cheer to your device backgrounds!” — Designed by Rachel Litzinger368 from Chiang Rai, Thailand.


    Good Bye My Dear Hot Friend

    “September is a month to start over again and say good bye to the summer and vacations. That’s why we want to say good bye to the sun who will travel from north to south this time of the year.” — Designed by Colorsfera391 from Spain.


    Sugar Cube

    Designed by Luc Versleijen436 from the Netherlands.


    Join In Next Month!

    Please note that we respect and carefully consider the ideas and motivation behind each and every artist’s work. This is why we give all artists the full freedom to explore their creativity and express emotions and experience throughout their works. This is also why the themes of the wallpapers weren’t anyhow influenced by us, but rather designed from scratch by the artists themselves.

    A big thank you to all designers for their participation. Join in next month461!

    What’s Your Favorite?

    What’s your favorite theme or wallpaper for this month? Please let us know in the comment section below.



    The post Desktop Wallpaper Calendars: September 2014 appeared first on Smashing Magazine.

    Designing Badges (And More) For A Conference

    Fri, 08/29/2014 - 13:15

    To badge or not to badge? That is the question. Because badges — and a lot of stuff designed for conferences — often look the same. But if you have a little, different conference, you need different kinds of things. Badges included.

    It all started in 2013 with the first Kerning conference1. I was asked to design the official notebook: we ended up with a really typographic design for the cover and a funny pattern on the back. And an Easter egg on the cover — more on that later. It was a really funny project, so when my dear friend Cristiano Rastelli2, a member of Kerning’s organizing committee, asked me to design the notebook and some printed materials for Kerning 2014 I immediately said “Yes, let’s start!”

    Kerning’s Calling

    I’m really passionate about letterpress, so there was no doubt about the notebook: letterpress printing again. But what about the cover? After the first really typographical design in 2013, I wanted to make some changes. I love to draw with pencil on paper and I love caricatures, too. I had drawn a lot of them over the past few years, so with all the photos of the speakers on Kerning’s website I decided to go that way: the cover would be full of caricatures.

Notebook cover for the first edition of Kerning.

    I had to decide how to draw them. When you use letterpress printing, costs can rise very quickly if you use a lot of colors. Each color is printed separately, so you have to take into account the budget from the very first second. As we had in 2013, we decided to use only the two official colors of the conference: black and red (Pantone 7417, to be precise). This was an important element because I also had to design other printed stuff too: badges for speakers, organizers, attendees and workshops. Oh, and some postcards too. This stuff had to be printed in digital offset, so we could use a lot of colors if we wanted; but since we had already set the boundaries with letterpress, both for style and colors, we decided to go the same way.

    The Idea Behind The Notebook And Badges

    Caricatures and two colors: that was OK, but how to match the design of the notebook and the badges? They had to be part of the same project and convey the same mood. The idea came from the back cover of the notebook: I had decided to use the same pattern from 2013, to establish some continuity. Looking at the pattern, I realized that it reminded me of playing cards. Not the face side obviously, but the back.

    So why not design the badges like playing cards? I had to design several different kinds of badges: those for workshops (leaders and attendees) plus those for the conference day: speakers, organizers, and attendees. And a badge for the conference host, too. With an idea for the design, I then needed to undertake some research about playing cards.

The pattern on the back cover reminds me of playing cards.

The Importance Of Research

    I did some research online, reading and viewing some really interesting websites about playing cards. Some playing cards have great drawings and are really complex, too. There are a lot of different styles, but what I needed was a more simple approach, suitable for both digital offset and letterpress. I identified the cards I had to design: kings for male speakers, queens for female speakers, a joker for the host, the ace of spades for organizers, as well as five and six for attendees and workshop presenters (in fact, the conference took place on June 5–6, 2014).

Caricatures at work on speakers’ badges.

    The kings, queens and the joker would be simpler than the usual version we’re used to seeing on playing cards, but since I really love details — even the smallest ones — I found the right card for these small, crazy things: the ace of spades. Aces of spades are always full of details, beautifully designed and so it had to be the one for Kerning. But a typographical ace of spades, since the conference is about typography.

Organizers’ badge: typographical ace of spades.

    Even though my research didn’t result in a striking new idea, it gave me a foundation. Often I find that research can either confirm the idea I already have or make me change it totally. Research helps me to find a good approach. Even if I design a completely new thing, my decision is based on what I’ve just seen. It’s not just a matter of style or taste or something like that: with research and analysis I have something strong to base my project on.

    Research for illustrations of caricatures is very important, because the more pictures you collect the more details you can draw. There were some pictures of the speakers on Kerning’s website, but I looked online for more references. After having collected some more photos, I was ready to start.

    The Process

    I always start with pencil on paper. Logos, illustrations, graphic projects, it doesn’t matter — pencil and paper give me a lot of freedom. If you start drawing in Illustrator or other software, you are bound by the limits and style of that software. And when you start sketching the basic concepts, you have to be free to experiment.

    There’s no way to be influenced by some marvellous feature or effect: black on white only. Freedom is fine, but I always take into account the boundaries of the project: in this case, just two colors and a really flat mood because of letterpress. Drawing some sketches and then developing a quite finished design only using pencil allowed me to concentrate on the most important things.

The caricature of Ellen Lupton: pencil on paper.

That said, I really love colors too, even if I’m not so good at painting with water colors or in oils. I really love flat design, so I like to draw with Illustrator. And Illustrator is the software I usually use for tracing drawings. So once my illustrations with pencil on paper are complete, I scan them to start the tracing job. I decided to draw only the caricatures by hand, making all the other graphic elements directly in Illustrator. There was an official font to use — Pluto Sans by HvD Fonts — and other things were really basic, so no need to draw them with pencil.

    Sketching is a Game of Trial and Error

    These first caricatures were really simple in style. I had tried to be as simple as possible to have something that was fine for letterpress. The eyes were just small dots and other elements were really simple. Unfortunately, I didn’t like the final result. I couldn’t — and didn’t want to — draw realistic caricatures, but what I had drawn was just too basic. I scanned the drawings and I tried to trace one or two of them, just to understand if I could make some changes directly in Illustrator: no way.

    Since tracing your drawings in Illustrator is quite a time-consuming job, I decided to start again from scratch with pencil and paper. There was not much time left before having to print everything, but I decided to start again anyway, since this was the best option to achieve exactly the final result I had in mind.

Caricatures, first pencil version: they were much too basic…

These caricatures lacked the right level of detail, useful to play with when you are then drawing the digital version.

    The second attempt was better: more detailed caricatures — not naturalistic, but something that was good for both letterpress and digital offset printing. I scanned all the caricatures again, and started tracing them in Illustrator with the pen tool. As you know, I really like pencil, so I really like using a graphics tablet and pen too. In fact, I use them every time I can, because it speeds up the process. I’m so used to it that sometimes I catch myself using a tablet with a word processor. I know, I know…

The final, more detailed caricature of Francesco Franchi with some tests for alternative mouths.

Tracing Sketches with Adobe Illustrator

    I usually use a very bright hue, like magenta or cyan, to be sure I can clearly see the lines against the black and white of the pencil drawings. I usually decide which weights I want to use for lines, especially if I draw illustrations with a few colors or only strokes with no fill. If I have a simple design style — a few colors, no complex shapes — I try to be as clean as I can. I think that “To complicate is easy. To simplify is difficult” by Bruno Munari is always good advice.

Tracing caricatures in Illustrator: same line weights for same elements (nose was 0.75 pt).

    For these caricatures I went with these weights: 1pt for contour and very important lines, 0.25pt for subtle details and 0.75pt for everything in between. If you decide on line weight before you start working it’s easier to maintain a consistent mood in all your drawings. Establishing line hierarchy also helps with establishing hierarchy between elements. If something is important, give it some weight. If it’s less important, make it thinner.

Different weights help establish a hierarchy between elements.

    Tracing is a matter of time and patience. Graphics tablets can help you a lot, but I usually spend a lot of time tracing my drawings with as few control points as I can. Lines are smoother and if you are working on complex illustrations, the fewer control points you have, the better the result will be. And — last but not least — fewer control points are great if you have to change a shape: think about having to change ten control points instead of four if you want to change just one hair!

Drawing lines with a few points is always fine: smooth results and less work if you want to change something.

Let’s Finish the Design!

    With all the caricatures traced, it was time to add the other elements: letters (K, Q and the star for the joker), the Kerning logo, and the person’s name and Twitter handle inside a ribbon. Not such a complex job, but there were badges for attendees, presenters, and organizers, too. I had just used hearts for speakers, so I decided to go with clubs and diamonds for workshops and conference. And for the organizers, the ace of spades.

Meaningful names for levels are always helpful, but they are fundamental for complex illustrations. Unless you want to go crazy trying to find something between dozens of layers…

    Workshop and conference day cards were not so difficult: five of clubs was the badge for workshops — both for presenters and attendees — while six of diamonds was for the conference day, since they took place respectively on June 5 and June 6. These cards were very simple, so I won’t cover them here. But aces of spades are usually really complex and full of details. Because Kerning is a conference about typography, letters were the right way to add some details to the card. Since these were the organizers’ badges, I used the same ribbon with names and Twitter handles that I had already used for speakers’ badges.

A typographical ace of spades for the organizers’ badge.

    Once I had finished all the badges, I designed the postcards. These had the same layout as the badges but more space, so I decided to include a really short bio of the speakers. Last, but not least, Cristiano came up with a great idea: to design another type of postcard, displaying the official conference hashtag — #keming. We used two slab (red background), two serif (black background), and two sans serif fonts (white background); and two green cards with Comic Sans and Buttermilk (fonts respectively by Vincent Connare and Jessica Hische, both speakers at the conference), plus a calligraphic version by Luca Barcellona (workshop presenter).

The final digital version of Frank Chimero’s postcard and some other badges.

Let’s Go Print!

    So, everything was fine, but I had to check some more things before printing the notebooks. If you use really soft paper with letterpress — and we used a 100% cotton paper for the cover — you have to carefully consider what happens to thin lines and smaller details when printed under high pressure.

Elliot Jay Stocks for letterpress (left, used for notebook) is a little bit different from Elliot Jay Stocks for digital offset (right, used for badges and postcards): different tricks for different technologies.

    The small dots inside the black area of the ace of spades, for instance, could disappear when printed, because of the pressure and the amount of ink all around them. So I made them a little bit bigger for the letterpress version. The same is true when thin lines are really close to one another: the risk is to lose details and have a kind of colored spot. So I simplified the hair a little bit for letterpress.

Kerning 2014 notebook: speakers and ace of spades on the cover.

    In general, when you use letterpress with cotton or soft paper and high pressure, take care of this aspect: small and fine details are OK, but lines might be slightly modified by pressure.

    Final Thoughts (That Is, Always Have Fun!)

    What more to say about this project? It started with a letterpress notebook in 2013, and developed into something more organic in 2014. It was a really great project I enjoyed a lot. And what about a complete deck of cards? I’ve been dreaming about designing a deck of cards since forever. Perhaps sooner or later I’ll design one. Who knows? With a lot of letters, obviously.

    Talking about letters, the Easter egg in the first edition of the Kerning notebook was the small caps words that created a phrase. A sentence inside a sentence. And that’s all!

    (ml, og, il, md)


    The post Designing Badges (And More) For A Conference appeared first on Smashing Magazine.

    Is Your Responsive Design Working? Google Analytics Will Tell You

    Thu, 08/28/2014 - 14:03

    Responsive web design has become the dominant method of developing and designing websites. It makes it easier to think “mobile first” and to create a website that is viewable on mobile devices.

    In the early days of responsive web design, creating breakpoints in CSS for particular screen sizes was common, like 320 pixels for iPhone and 768 pixels for iPad, and then we tested and monitored those devices. As responsive design has evolved, we now more often start with the content and then set breakpoints when the content “breaks.” This means that you might end up with quite a few content-centric breakpoints and no particular devices or form factors on which to test your website.

    However, we are just guessing that our designs will perform well with different device classes and form factors and across different interaction models. We need to continually monitor a design’s performance with real traffic.

    Content-centric breakpoints are definitely the way to go, but they also mean that monitoring your website to identify when it breaks is more important. This information, when easily accessible, provides hints on what types of devices and form factors to test further.

Google Analytics has some great multi-device features built in; however, with responsive design, we are really designing for form factors, not for devices. In this article, we’ll demonstrate how WURFL.js and Google Analytics can work together to show performance metrics across form factors. No more guessing.

    Why Form Factor?

Speeding up and optimizing the user experience for a particular device or family of devices is always easier. In reality, though, creating a device-specific experience for all types of devices is not feasible, given that the diversity of web-enabled devices will just continue to grow. However, every device has a particular form factor. Luke Wroblewski, author of Mobile First, outlines three categories to identify device experiences:

    • usage or posture,
    • input method,
    • output or screen.

    Because devices vary between these categories, we get different form factors. Hence, treating form factor as the primary dimension through which to monitor a responsive website makes sense. This will indicate which type of device to test for usability.

    The examples in this article all use WURFL.js, including the form factors provided by it, which are:

    • desktop,
    • app,
    • tablet,
    • smartphone,
    • feature phone,
    • smart TV,
    • robot,
    • other non-mobile,
    • other mobile.
    Feeding Data To Google Analytics

    The first step is to put WURFL.js on the pages that you want to track. Simply paste this line of code into your markup:

    <script type="text/javascript" src="//wurfl.io/wurfl.js"></script>

    This will create a global WURFL object that you can access through JavaScript:
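For instance, the object exposes the three properties used throughout this article — complete_device_name, form_factor and is_mobile. Here is a small sketch with illustrative values (the sample values and the deviceLabel helper are hypothetical, not part of WURFL.js):

```javascript
// Illustrative shape of the global WURFL object (values vary per device)
var WURFL = {
  complete_device_name: "Apple iPhone 5",
  form_factor: "Smartphone",
  is_mobile: true
};

// Hypothetical helper: build a human-readable label from the detected properties
function deviceLabel(w) {
  return w.complete_device_name + " [" + w.form_factor + (w.is_mobile ? ", mobile]" : "]");
}

console.log(deviceLabel(WURFL)); // "Apple iPhone 5 [Smartphone, mobile]"
```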


Now that the script tag is in place, the only other thing to do is add the WURFL-specific lines (marked with a comment below) to Google Analytics’ tracking code:

    /* Google Analytics' standard tracking code */
    _gaq.push(['_setAccount', 'UA-99999999-1']);
    _gaq.push(['_setDomainName', 'example.com']); // replace with your own domain
    _gaq.push(['_trackPageview']);

    /* Tell Google Analytics to log WURFL.js' data */
    _gaq.push(['_setCustomVar', 1, 'complete_device_name', WURFL.complete_device_name, 1]);
    _gaq.push(['_setCustomVar', 2, 'form_factor', WURFL.form_factor, 1]);
    _gaq.push(['_setCustomVar', 3, 'is_mobile', WURFL.is_mobile, 1]);

    /* The rest of Analytics' standard tracking code */
    (function() {
      var ga = document.createElement('script');
      ga.type = 'text/javascript';
      ga.async = true;
      ga.src = ('https:' == document.location.protocol ? 'https://ssl' : 'http://www') + '.google-analytics.com/ga.js';
      var s = document.getElementsByTagName('script')[0];
      s.parentNode.insertBefore(ga, s);
    })();

Or, if you have updated to Google Analytics’ new “Universal Analytics”, you would add this:

    /* Google Analytics' new universal tracking code */
    (function(i,s,o,g,r,a,m){i['GoogleAnalyticsObject']=r;i[r]=i[r]||function(){(i[r].q=i[r].q||[]).push(arguments)},i[r].l=1*new Date();a=s.createElement(o),m=s.getElementsByTagName(o)[0];a.async=1;a.src=g;m.parentNode.insertBefore(a,m)})(window,document,'script','//www.google-analytics.com/analytics.js','ga');

    ga('create', 'UA-99999999-1', 'auto');

    /* Send the custom dimensions along with the pageview */
    ga('send', 'pageview', {
      'dimension1': WURFL.complete_device_name,
      'dimension2': WURFL.form_factor,
      'dimension3': WURFL.is_mobile
    });

Further, if you are using Universal Analytics, you must remember to define the custom dimensions. You do that by clicking Admin → Custom Definitions → Custom Dimensions.
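If WURFL.js ever fails to load (an ad blocker or a network hiccup), referencing the WURFL global directly would throw an error and the pageview would never be sent. One defensive option — a hypothetical helper, not part of WURFL.js or Analytics — is to build the dimensions through a function with a fallback:

```javascript
// Hypothetical helper: map WURFL.js properties onto the three custom
// dimensions defined in the Admin UI, falling back to placeholders
// if the WURFL global is missing
function buildDimensions(wurfl) {
  var w = wurfl || { complete_device_name: 'unknown', form_factor: 'unknown', is_mobile: false };
  return {
    'dimension1': w.complete_device_name,
    'dimension2': w.form_factor,
    'dimension3': w.is_mobile
  };
}
```

The tracking call then becomes ga('send', 'pageview', buildDimensions(window.WURFL));, which degrades gracefully instead of aborting the hit.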

For Universal Analytics you need to define the custom dimensions in the Admin section.

Analyzing The Data In Google Analytics

    Now that the data is in Google Analytics, we need to make it available for inspection. We can use custom variables in Analytics in a number of ways, the most obvious being to look in the menu on the left and click Audience → Custom → Custom Variables:

“Custom Variables” report.

If you are using Universal Analytics, you’ll have the custom dimensions available like any other dimension in all reports in GA:

Accessing custom dimensions.

    Already, we’re getting a pretty good picture of how form factors behave differently. The best metrics to focus on will obviously depend on your website, but in general, pay attention to bounce rate and pages per visit.

    Big Picture With Dashboard Widgets

With dashboards in Google Analytics, we get a high-level overview of the most important metrics. This is a good place to monitor how your website performs across form factors. Once again, bounce rate and page impressions per visit are good metrics to start with. The purpose of the dashboard widgets is to alert you and to visualize how your website’s performance changes for certain form factors.

    Let’s create a few widgets to display the status of different form factors. First, create a pie-chart widget that shows how much your website is being used by different form factors.

Widget displaying form factors.

In the Dashboard, click Add Widget, select Pie, then the Sessions metric, and group it by the form factor custom variable. Note that the label in the green drop-down list is Custom Variables, not the actual name. In our example, the form factor variable is in the second slot, but make sure to choose the right slot if you’ve implemented it in a different order. Again, if you have converted to Universal Analytics, the procedure is similar, but instead of selecting custom variables, you simply add the name of your custom dimension as you would with any other dimension.

Next, create a few widgets to display visits and bounce rates per form factor. The widgets will indicate whether changes to the website have had a positive or negative impact. Obviously, you want higher visits and a lower bounce rate.

Creating a “form factor” widget.

    Create this widget by adding a filter to the standard metrics. Choose a timeline diagram and filter the data with your custom variable where you have stored the form factor. Create one widget for each of the form factors that you want to monitor:

“Form factor” widgets in the dashboard.

    You might find that some form factors disappear in the statistics for global bounce rates because the data set is now bigger (as in the example above). As indicated by the red arrows, something dramatic has happened with smartphones and feature phones. Specifically, some changes were made to the landing page to increase traffic from tablets, and the changes clearly had a negative impact on traffic from smartphones and feature phones. Identifying the reason for the drop in traffic requires more fine-grained Analytics reports, and the drop might not have been easy to spot without having monitored form factors.

    Form Factor Segments

Any custom variable that you put into Google Analytics is, of course, available in most reports as filters or dimensions, so tweaking them to your needs is quite easy. Another way to keep form factors at the top of mind is to put them in segments by creating conditions. Here is one segment per form factor that you’ll want to track:

Configure a segment. If you’re using Universal Analytics, you must use your custom dimensions rather than the custom variables.

    The same, but in Universal Analytics:

The same segment, configured in Universal Analytics.

    Google Analytics will show these segments in most of its standard reports as separate dimensions in charts and tables:

Segments chart.

    You can make “form factor” a dimension in most reports. As mentioned, bounce rate and general engagement are key metrics to follow, but goals and conversion rate are obviously interesting, too. You might find the need to create new goals or at least review your funnel for certain form factors.

    After monitoring form factors for a while, you might conclude that you need to offer different user experiences for one or more form factors. Furthermore, you might need to tweak goals, funnels and advertising campaigns to account for differences in usage per form factor or device type.

    We have used Google Analytics here, but WURFL.js is, of course, compatible with other analytics tools, as long as custom variables like the ones above are allowed.


    In this article, we have looked at how performance per form factor is a key metric for monitoring a website and how WURFL.js and Google Analytics help to visualize this data. Once you put WURFL.js’ data into Analytics, it will be available in most standard reports as filters or dimensions, so tweaking the reports to your needs is quite straightforward. And the dashboard widgets will give you a high-level overview of their status. Also, bounce rate and page impressions per visit are key metrics, at least to start; so, defining form factors as segments will give you nice visualizations in most standard reports.

As a next step, look into conversions and goals in Google Analytics to see how to integrate and monitor form factors, which will vary according to the website’s function and purpose. To give you a head start, we have made a template that you can install in your Google Analytics dashboard (this template uses custom variables, not custom dimensions). Just follow the instructions to assign an Analytics property, which will then appear under Dashboards → Private.

    (al, ml, il)


    The post Is Your Responsive Design Working? Google Analytics Will Tell You appeared first on Smashing Magazine.

    Customizing WordPress Archives For Categories, Tags And Other Taxonomies

    Wed, 08/27/2014 - 16:47

    Most WordPress users are familiar with tags and categories and with how to use them to organize their blog posts. If you use custom post types in WordPress, you might need to organize them like categories and tags. Categories and tags are examples of taxonomies, and WordPress allows you to create as many custom taxonomies as you want. These custom taxonomies operate like categories or tags, but are separate.

    In this tutorial, we’ll explain custom taxonomies and how to create them. We’ll also go over which template files in a WordPress theme control the archives of built-in and custom taxonomies, and some advanced techniques for customizing the behavior of taxonomy archives.


    Before continuing, let’s get our terminology straight. A taxonomy is a WordPress content type, used primarily to organize content of any other content type. The two taxonomies everyone is familiar with are built in: categories and tags. We tend to call an individual posting of a tag a “tag,” but to be precise, we should refer to it as a “term” in the “tag” taxonomy. We pretty much always refer to items in a custom taxonomy as “terms.”

    Categories and tags represent the two types of taxonomies: hierarchical and non-hierarchical. Like categories, hierarchical taxonomies can have parent-child relationships between terms in the taxonomy. For example, you might have on your blog a “films” category that has several child categories, with names like “foreign” and “domestic.” Custom taxonomies may also be hierarchical, like categories, or non-hierarchical, like tags.

A small part of the “WordPress Template Hierarchy”. (Source)

    The archive of a taxonomy is the list of posts in a taxonomy that is automatically generated by WordPress. For example, this would be the page you see when you click on a category link and see all posts in that category. We’ll go over how to change the behavior of these pages and learn which template files generate them.

    How Tag, Category and Custom Taxonomy Archives Work

    For every category, tag and custom taxonomy, WordPress automatically generates an archive that lists each post associated with that taxonomy, in reverse chronological order. The system works really well if you organize your blog posts with categories and tags. If you have a complex system of organizing custom post types with custom taxonomies, then it might not be ideal. We’ll go over the many ways to modify these archives.

    The first step to customizing is to know which files in your theme are used to display the archive. Different themes have different template files, but all themes have an index.php template. The index.php template is used to display all content, unless a template exists higher up in the hierarchy. WordPress’ template hierarchy is the system that dictates which template file is used to display which content. We’ll briefly go over the template hierarchy for categories, tags and custom taxonomies. If you’d like to learn more, these resources are highly recommended:

    Most themes have an archive.php template, which is used for category and tag archives, as well as date and author archives. You can add a template file to handle category and tag archives separately. These templates would be named category.php or tag.php, respectively. You could also create templates for specific tags or categories, using the ID or slug of the category or tag. For example, a tag with the ID of 7 would use tag-7.php, if it exists, rather than tag.php or archive.php. A tag with the slug of “avocado” would be displayed using the tag-avocado.php template.

    One tricky thing to keep in mind is that a template named after a slug will override a template named after an ID number. So, if a tag with the slug of “avocado” had an ID of 7, then tag-avocado.php would override tag-7.php, if it exists.
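That lookup order can be sketched as a simple priority list (an illustration only — a hypothetical function, not WordPress’ actual implementation):

```javascript
// Sketch of WordPress' tag-template lookup order:
// tag-{slug}.php beats tag-{id}.php beats tag.php beats archive.php beats index.php
function resolveTagTemplate(slug, id, existingTemplates) {
  var candidates = [
    "tag-" + slug + ".php",
    "tag-" + id + ".php",
    "tag.php",
    "archive.php",
    "index.php"
  ];
  for (var i = 0; i < candidates.length; i++) {
    // The first candidate that exists in the theme wins
    if (existingTemplates.indexOf(candidates[i]) !== -1) return candidates[i];
  }
  return "index.php";
}

console.log(resolveTagTemplate("avocado", 7, ["tag-7.php", "tag-avocado.php", "archive.php"]));
// → "tag-avocado.php": the slug-named template overrides the ID-named one
```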

    The template hierarchy for custom taxonomies is a little different, because there are templates for all taxonomies, for specific taxonomies and for specific terms in a specific taxonomy. So, imagine that you have two taxonomies, “fruits” and “vegetables,” and the “fruits” taxonomy has two terms, “apples” and “oranges,” while “vegetables” has two terms, “carrots” and “celery.” Let’s add three templates to our website’s theme: taxonomy.php, taxonomy-fruits.php and taxonomy-vegetables-carrots.php.

    For the terms in the “fruits” taxonomy, all archives would be generated using taxonomy-fruits.php because no term-specific template exists. On the other hand, the term “carrots” in the “vegetables” taxonomy’s archives would be generated using taxonomy-vegetables-carrots.php. Because no taxonomy-vegetables.php template exists, all other terms in “vegetables” would be generated using taxonomy.php.

    Using Conditional Tags

    While you can add any of the custom templates listed above to create a totally unique view for any category, tag, custom taxonomy or custom taxonomy term, sometimes all you want to do is make one or two little changes. In fact, try to avoid creating a lot of templates because you will need to adjust each one when you make overall changes to the basic HTML markup that you use in each template in the theme. Unless I need a template that is radically different from the theme’s archive.php, I tend to stick to adding conditional changes to archive.php.

WordPress provides conditional functions to determine what kind of archive is being displayed: is_category() for categories, is_tag() for tags and is_tax() for custom taxonomies. The is_tag() and is_category() functions can also test for specific categories or tags by slug or ID. For example:

    <?php
    if ( is_tag() ) {
        echo "True for any tag!";
    }
    if ( is_tag( 'jedi' ) ) {
        echo "True for the tag whose slug is jedi";
    }
    if ( is_tag( array( 'jedi', 'sith' ) ) ) {
        echo "True for tags whose slug is jedi or sith";
    }
    if ( is_tag( 7 ) ) {
        echo "You can also use tag IDs. This is true for tag ID 7";
    }
    ?>

    For custom taxonomies, the is_tax() function can be used to check whether any taxonomy (not including categories and tags), a specific taxonomy or a specific term in a taxonomy is being shown. For example:

    <?php
    if ( is_tax() ) {
        echo "True for any custom taxonomy.";
    }
    if ( is_tax( 'vegetable' ) ) {
        echo "True for any term in the vegetable taxonomy.";
    }
    if ( is_tax( 'vegetable', 'celery' ) ) {
        echo "True only for the term celery, in the vegetable taxonomy.";
    }
    ?>

    Creating Custom Taxonomies

Adding a custom taxonomy can be done in one of three ways: coding it manually according to the instructions in the Codex, which I don’t recommend; generating the code using GenerateWP; or using a plugin for custom content types, such as Pods or Types. Plugins for custom content types enable you to create custom taxonomies and custom post types in WordPress’ back end without having to write any code. Using one is the easiest way to add a custom taxonomy and to get a framework for working with custom content types.

    If you opt for one of the first two options, rather than a plugin, then you will need to add the code either to your theme’s functions.php file or to a custom plugin. I strongly recommend creating a custom plugin, rather than adding the code to functions.php. Even if you’ve never created a plugin before, I urge you to do it. While adding the code to your theme’s functions.php will work, when you switch themes (say, because you want to use a new theme or to troubleshoot a problem), the taxonomy will no longer work.

    Whether you write your custom taxonomy code by following the directions in the Codex or by generating it with GenerateWP, just paste it in a text file and add one line of code before it and you’ll have a plugin. Upload it and install it as you would any other plugin.

    The only line you need to create a custom plugin is /* Plugin name: Custom Taxonomy */.

    Below is a plugin to register a custom taxonomy named “vegetables,” which I created using GenerateWP because it’s significantly easier and way less likely to contain errors than doing it manually:

    <?php
    /* Plugin Name: Veggie Taxonomy */

    if ( ! function_exists( 'slug_veggies_tax' ) ) {

        // Register Custom Taxonomy
        function slug_veggies_tax() {
            $labels = array(
                'name'                       => _x( 'Vegetables', 'Taxonomy General Name', 'text_domain' ),
                'singular_name'              => _x( 'Vegetable', 'Taxonomy Singular Name', 'text_domain' ),
                'menu_name'                  => __( 'Taxonomy', 'text_domain' ),
                'all_items'                  => __( 'All Veggies', 'text_domain' ),
                'parent_item'                => __( 'Parent Veggie', 'text_domain' ),
                'parent_item_colon'          => __( 'Parent Veggie:', 'text_domain' ),
                'new_item_name'              => __( 'New Veggie name', 'text_domain' ),
                'add_new_item'               => __( 'Add new Veggie', 'text_domain' ),
                'edit_item'                  => __( 'Edit Veggie', 'text_domain' ),
                'update_item'                => __( 'Update Veggie', 'text_domain' ),
                'separate_items_with_commas' => __( 'Separate Veggies with commas', 'text_domain' ),
                'search_items'               => __( 'Search Veggies', 'text_domain' ),
                'add_or_remove_items'        => __( 'Add or remove Veggies', 'text_domain' ),
                'choose_from_most_used'      => __( 'Choose from the most used Veggies', 'text_domain' ),
                'not_found'                  => __( 'Not Found', 'text_domain' ),
            );
            $args = array(
                'labels'            => $labels,
                'hierarchical'      => false,
                'public'            => true,
                'show_ui'           => true,
                'show_admin_column' => true,
                'show_in_nav_menus' => true,
                'show_tagcloud'     => false,
            );
            register_taxonomy( 'vegetable', array( 'post' ), $args );
        }

        // Hook into the 'init' action
        add_action( 'init', 'slug_veggies_tax', 0 );

    }
    ?>

    By the way, I created this code using GenerateWP in less than two minutes! The service is great, and manually writing code that this website can automatically generate for you makes no sense. To make the process even easier, you can use the plugin Pluginception9 to create a blank plugin for you and then paste the code from GenerateWP into it using WordPress’ plugin editor.

    Using WP_Query With Custom Taxonomies

    Once you have added a custom taxonomy, you might want to query for posts with terms in that taxonomy. To do this, we can use taxonomy queries with WP_Query.

    Taxonomy queries can be very simple or complicated. The simplest query would be for all posts with a certain term. For example, if you had a post type named “jedi” and an associated custom taxonomy named “level,” then you could get all Jedi masters like this:

    <?php
    $args = array(
        'post_type' => 'jedi',
        'level'     => 'master',
    );
    $query = new WP_Query( $args );
    ?>

    If you added a second custom taxonomy named “era,” then you could find all Jedi masters of the Old Republic like this:

    <?php
    $args = array(
        'post_type' => 'jedi',
        'level'     => 'master',
        'era'       => 'old-republic',
    );
    $query = new WP_Query( $args );
    ?>

    We can also do more complicated comparisons, using a full tax_query. The tax_query argument enables us to search by ID instead of slug (as we did before) and to search for more than one term. It also enables us to combine multiple taxonomy queries and to set the relationship between the two. In addition, we can even use SQL operators such as NOT IN to exclude terms.

    The possibilities are endless. Explore the “Taxonomy Parameters10” section of the Codex page for “Class Reference/WP_Query” for complete information. The snippet below searches our “jedi” post type for Jedi knights and masters who are not from the Old Republic era:

    <?php
    $args = array(
        'post_type' => 'jedi',
        'tax_query' => array(
            'relation' => 'AND',
            array(
                'taxonomy' => 'level',
                'field'    => 'slug',
                'terms'    => array( 'master', 'knight' ),
            ),
            array(
                'taxonomy' => 'era',
                'field'    => 'slug',
                'terms'    => array( 'old-republic' ),
                'operator' => 'NOT IN',
            ),
        ),
    );
    $query = new WP_Query( $args );
    ?>

    Customizing Taxonomy Archives

    So far, we have covered how taxonomies, tags and categories work by default, as well as how to create custom taxonomies. If any of this default behavior doesn’t fit your needs, you can always modify it. We’ll go over some ways to modify WordPress’ built-in functionality for those of you who use WordPress less as a blogging platform and more as a content management system, which often requires custom taxonomies.

    Hello pre_get_posts

    Before any posts are output by the WordPress loop, WordPress automatically retrieves the posts for the user according to the page they are on, using the WP_Query class. For example, on the main blog index, it gets the most recent posts. In a taxonomy archive, it gets the most recent posts in that taxonomy.

    To change that query, you can use the pre_get_posts filter before WordPress gets any posts. This filter exposes the query object after it is set but before it is used to actually get any posts. This means that you can modify the query using the class methods before the main WordPress loop is run. If that sounds confusing, don’t worry — the next few sections of this article give practical examples of how this works.
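    The mechanics are easy to demonstrate outside of WordPress. The sketch below uses a stand-in class (Stub_Query and widen_post_types are illustrative names, not WordPress APIs) to show why a callback can change the query without creating a new one: PHP passes objects by handle, so set() mutates the very object WordPress will later use to fetch posts.

```php
<?php
// Stand-in for WP_Query (NOT the real class): just enough to show how a
// pre_get_posts-style callback mutates query arguments via set() before
// any posts are fetched.
class Stub_Query {
    private $args = array( 'post_type' => 'post' );
    public function set( $key, $value ) { $this->args[ $key ] = $value; }
    public function get( $key ) { return $this->args[ $key ]; }
}

// A callback in the shape WordPress expects: it receives the query object
// and changes it in place. No return value is needed, because PHP passes
// objects by handle.
function widen_post_types( $query ) {
    $query->set( 'post_type', array( 'post', 'jedi' ) );
}

$query = new Stub_Query();
widen_post_types( $query );            // WordPress would invoke this on the hook
print_r( $query->get( 'post_type' ) ); // now contains both post types
```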

    Adding Custom Post Types to Category or Tag Archives

    A great use of modifying the WP_Query object using pre_get_posts is to add posts from a custom post type to the category archive. By default, custom post types are not included in this query. If we were constructing arguments to be passed to WP_Query and wanted to include both regular posts and posts in the custom post type “jedi,” then our argument would look like this:

    <?php
    $args = array(
        'post_type' => array( 'post', 'jedi' ),
    );
    ?>

    In the callback for our pre_get_posts filter, we need to pass a similar argument. The problem is that the WP_Query object already exists, so we can’t pass an argument to it like we do when creating an instance of the class. Instead, we use the set() class method, which allows us to change any of the arguments after the class has been instantiated.

    In the snippet below, we use set() to change the post_type argument from the default value, which is post, to an array of post types, including posts and our custom post type “jedi.” Note that we are using the conditional tag is_category() so that the change happens only when category archives are being displayed.

    <?php
    add_filter( 'pre_get_posts', 'slug_cpt_category_archives' );
    function slug_cpt_category_archives( $query ) {
        if ( $query->is_category() && $query->is_main_query() ) {
            $query->set( 'post_type', array( 'post', 'jedi' ) );
        }
        return $query;
    }
    ?>

    This function’s $query parameter is the WP_Query object before it is used to populate the main loop. Because a page may include multiple loops, we use the conditional function is_main_query() to ensure that the change affects only the main loop and not any secondary loops on the page, such as those used by widgets.

    Making Category or Hierarchical Taxonomy Archives Hierarchical

    By default, the archives for categories and other hierarchical taxonomies act like any other taxonomy archive: they show all posts in that category or with that taxonomy term, including posts assigned only to child terms. To show only posts assigned directly to the parent term and exclude those in child terms, you can use the pre_get_posts filter again.

    Just like when creating your own WP_Query for posts in a taxonomy, the main loop’s WP_Query uses the tax_query arguments to get posts by taxonomy. The tax_query has an include_children argument, which by default is set to 1, or true. By changing it to 0, or false, we can prevent posts with a child term from being included in the archive:

    <?php
    add_action( 'pre_get_posts', 'slug_exclude_child_terms' );
    function slug_exclude_child_terms( $query ) {
        if ( $query->is_tax( 'TAXONOMY NAME' ) && $query->is_main_query() ) {
            $tax_query = $query->tax_query->queries;
            $tax_query['include_children'] = 0;
            $query->set( 'tax_query', $tax_query );
        }
    }
    ?>

    The result sounds desirable but has several major shortcomings. That’s OK, because if we address those flaws, we’ll have taken the first step to creating something very cool.

    The first and biggest problem is that the result is not an archive page that shows the child terms; it’s still a list of posts with the parent term. The other problem is that we don’t have a good way to navigate to the child-term archives.

    A good way to deal with this is to combine the pre_get_posts filter above with a modification to the template that shows the category or taxonomy. We discussed earlier how to determine which template is used to output category or custom taxonomy archives. Also, keep in mind that you can always wrap your changes in conditional tags, such as is_category() or is_tax(), but that can become unwieldy quickly; so, making a copy of your archive.php and removing any unneeded code probably makes more sense.

    The first step is to wrap the entire thing in a check to see whether the current taxonomy term has children. If it does not, then we do not want to output anything. To do this, we use get_term_children(), which will return an empty array if the current term has no children and which we can test for with !empty().

    To make this work for any taxonomy that might be displayed, we need to get the current taxonomy and taxonomy term from the query_vars array of the global $wp_query object. The taxonomy’s slug is contained in the taxonomy key, and the term’s slug is in the term key.

    To use get_term_children(), we must have the term’s ID. The ID is not in query_vars, but we can pass the slug to get_term_by() to get it.

    Here is how we get all of the information that we need into variables:

    <?php
    global $wp_query;
    $taxonomy = $wp_query->query_vars['taxonomy'];
    $term     = $wp_query->query_vars['term'];
    $term_id  = get_term_by( 'slug', $term, $taxonomy );
    $term_id  = $term_id->term_id;
    $terms    = get_term_children( $term_id, $taxonomy );
    ?>

    Now we will continue only if $terms isn’t an empty array. Inside that check, the first thing we do is repopulate $terms using get_terms(). This is necessary because get_term_children() returns only an array of IDs, and we need both IDs and names, which are in the objects returned by get_terms(). We can loop through these objects, outputting each name as a link. The link can be generated by passing the term’s ID to get_term_link().

    Here is the complete code:

    <?php
    global $wp_query;
    $taxonomy = $wp_query->query_vars['taxonomy'];
    $term     = $wp_query->query_vars['term'];
    $term_id  = get_term_by( 'slug', $term, $taxonomy );
    $term_id  = $term_id->term_id;
    $terms    = get_term_children( $term_id, $taxonomy );
    if ( ! empty( $terms ) ) {
        $terms = get_terms( $taxonomy, array( 'child_of' => $term_id ) );
        echo '<ul class="child-term-list">';
        foreach ( $terms as $term ) {
            echo '<li><a href="' . get_term_link( (int) $term->term_id, $taxonomy ) . '">' . $term->name . '</a></li>';
        }
        echo '</ul>';
    }
    ?>

    Creating A Custom Landing Page For Taxonomy Archives

    If your hierarchical taxonomy has no posts assigned directly to a parent term, then the regular taxonomy archive will be of no use for that term. You really want to show links to the child terms instead.

    In this case, a good option is to create a custom landing page for the term. We’ll use query_vars again to determine whether the user is on the first page of a taxonomy archive; if so, we will use the taxonomy_template filter to include a separate template, like this:

    <?php
    add_filter( 'taxonomy_template', 'slug_tax_page_one' );
    function slug_tax_page_one( $template ) {
        if ( is_tax( 'TAXONOMY_NAME' ) ) {
            global $wp_query;
            $page = $wp_query->query_vars['paged'];
            if ( 0 == $page ) {
                $template = get_stylesheet_directory() . '/taxonomy-page-one.php';
            }
        }
        return $template;
    }
    ?>

    This callback first checks that the user is in the taxonomy that we want to target. We can target all taxonomies by changing this to just is_tax(). Then, it gets the current page using the query_var named paged, and if the user is on the first page, then it returns the address for the new template file. If not, it returns the default template file.

    What you put in that template file is up to you. You can create a list of terms using the code shown above. You can use it to output any content, really — for example, more information about the taxonomy term or links to specific posts.

    Taking Control

    With a bit of work, WordPress’ basic architecture, which still reflects its origins as a blogging platform, can be customized to fit almost any website or Web app. Using custom taxonomies to organize your content and doing it in a way that suits your needs will be an important step in many of your WordPress projects. Hopefully, this post has brought you a step closer to getting the most out of this powerful aspect of WordPress.

    (dp, al, il)


    The post Customizing WordPress Archives For Categories, Tags And Other Taxonomies appeared first on Smashing Magazine.

    A Quick Tour Of WordPress 4.0

    Wed, 08/27/2014 - 12:32

    Today, WordPress has released the first release candidate1 (RC) for the upcoming 4.0 version. According to the official version numbering2, WordPress 4.0 is no more or less significant than 3.9 was or 4.1 will be. That being said, a new major release is always a cause for excitement! Let’s take a look at the new features the team at WordPress has been working on for us.

    Installation Language

    Since I’ve always used WordPress in English, it took me a while to realize how important internationalization is. A full 29% of all installations3 use a non-English language, which is huge. Version 4.0 makes it much easier to get WordPress to speak your language. In fact, the first installation screen asks you to choose your native tongue. Nice!

    This is a big step up from having to download a localized package or grab the language files manually and modify the config.

    Embed Previews For URLs

    Embedding content into posts has also become a much nicer process. One of my gripes with the visual editor used to be that it wasn’t visual enough. Not that long ago, you just got a grey box in place of a gallery or other media/embed items. The “Smith” release took care of galleries, and 4.0 is taking care of a host of other items. If you paste a YouTube URL in text mode, it will render as a video in visual mode. How handy is that?

    I find this a lot more pleasing to work with — I see exactly what I’m going to get. The media modal’s “insert from URL” feature is getting the same upgrade. As soon as you’ve entered a URL, the video will load — playable and all! The good news is that it works with all the services you’d expect, from Vimeo to Twitter, Hulu and Flickr. Scott Taylor (who is a core contributor working on this) has kindly gathered some test URLs; I recommend checking out the Trac ticket4 to find out more.

    Media Section Grid

    The media section now has a grid view by default. This isn’t a groundbreaking coding feat by any measure, but it does introduce a sleeker UI which is perhaps a glimpse of what is coming up in the future.

    While this is a minor change, it does give you a way better overview of your media files than the default view of 20 images in a list.

    Plugin Discovery And Installation

    In my opinion, the plugin “Add New” page got a much needed makeover. The top navigation looks a lot like the new navigation in the media section — another indication of a slightly more modern interface creeping into the system. Plugins in the list view are displayed in a much more visual fashion, and it looks like it’s time for developers to start making thumbnails! While the plugin details screen could use a makeover as well, I’m sure this is a work in progress and will be explored further.

    Better Post Editing

    One feature I’m particularly happy with is how the editor height has been changed to use screen real estate better. Mark Jaquith painted a great picture5 of the problem:

    “The post editor feels like it has been relegated to a box of medium importance on the edit/compose screen.”

    This one is a bit difficult to capture in a screenshot, so here’s a quick video of it in action:

    UI Improvements For Widget Customization

    It’s great that widgets have been included in the theme customizer. Usually, if you had more than five to six widgets, things became a bit too crowded. Fortunately, the new WP version has now put all widgets into a sub-section of the customization screen. This essentially minimizes them when not needed — a welcome UI improvement for sure!
    Join The Fun

    As always, the latest development versions can be tried out pretty easily. By installing the WordPress Beta Tester6 plugin you can update to the latest beta builds or nightlies and play around with the brand new features.

    If you happen to find any bugs, you can add them to the WordPress Trac7 and you can even fix them and contribute to the core! WordPress is a community project, and every little bit helps!


    While I agree that WordPress 4.0 isn’t the same leap as 3.0 was, I disagree with this being a problem. Instead of adding more and more features, the team is consolidating existing features and working hard to bring us a better user experience.

    The features above will actually affect how I use the day-to-day features of WordPress in a positive way. While I may not be able to post from my Google Glass (partly because I don’t have one), the ability to use WordPress better is far more important.



    The post A Quick Tour Of WordPress 4.0 appeared first on Smashing Magazine.

    Dropbox’s Carousel Design Deconstructed (Part 1)

    Tue, 08/26/2014 - 09:00

    Many of today’s hottest technology companies, both large and small, are increasingly using the concept of the minimum viable product (MVP) as way to iteratively learn about their customers and develop their product ideas.

    By focusing on an integral set of core functionality and corresponding features for product development, these companies can efficiently launch and build on new products. While the concepts are relatively easy to grasp, the many trade-offs considered and decisions made in execution are seldom easy and are often highly debated.

    In this two-part series, the product team at UXPin looks into the product design process of Dropbox’s Carousel and shares our way of thinking about product design, whether you’re in a meeting, whiteboarding, sketching, writing down requirements, or wireframing and prototyping.

    Part 1 is about the core user, their needs and Dropbox’s business needs, and it breaks down existing photo and video apps. Part 2 will cover Carousel’s primary requirements, the end product, its performance and key learnings since the launch.

    The Carousel MVP

    It’s been reported1 that Dropbox wants Carousel, its new mobile photo and video gallery app, to be “the go-to place for people to store and access their digital photos [and videos],” to be the “one place for all your memories.” In effect, Carousel allows you to access all of your photos and videos stored in a Dropbox account on any device, unifying them in a single interface that automatically sorts files by time and location.

    More specifically, the app launched with several key features:

    • Backing up
      It integrates directly with Dropbox’s file storage to save all photos and videos taken on your mobile phone.
    • Viewing
      A cloud-based media gallery displays all of your photos and videos without taking up local storage on your phone.
    • Sharing
      It offers many ways for you to share photos and videos with others, primarily by sending links to view them in Carousel.
    • Discussing
      A new chat thread is created for every group of people with whom you share a collection of pictures or videos.
    2Carousel screenshots of backup, viewing, sharing & discussing. (View large version3)

    Since launching, Carousel has received polarizing reviews. Amidst this uproar of praise and feature requests, we’ll go over how any product or design team could arrive at the same initial release — a critical exercise, especially in a market as crowded as the one for photo apps. First, we’ll summarize what Carousel is, then break down part of the design process for this MVP, and then compare the UI and UX to existing design patterns such as Apple’s Photos, Instagram, Google+, Camera+, Flickr, Facebook, Picturelife and Dropbox Photos itself.

    You can be sure that the product team’s meetings sounded a little like this:

    “Photos are hot.”

    “People store a lot of photos on Dropbox.”

    “So, let’s build a mobile gallery.”

    “Want to copy Apple’s Photos but integrate it with Dropbox?”

    “Sure, but we also need to copy Facebook Messenger because we’re social, too.”

    “OK, draw some sketches, make some wireframes, create the final mockups and build a high-fidelity prototype.”

    “Ready to ship to 275 million users?”


    Now that we have an idea of what Carousel is, let’s consider how the team might have gone about designing the app.

    Core Users

    Carousel clearly targets consumers, both young and old, rather than professionals or enterprises. No question about that. This is clear from the interface and the visual design choices4, marked by a youthful montage of two personas, Nora and Owen (yes, they have names), who we see growing up.

    5Carousel’s core users: Nora and Owen, your everyday consumer. (View large version6)

    User Needs

    A lot of decisions need to be made here because the market already has literally hundreds of apps for taking and managing photos. A few of the main use cases for these photo apps are:

    • taking photos (i.e. with the camera);
    • editing photos (with filters or advanced editing);
    • backing up and syncing across devices;
    • viewing;
    • managing (tagging, arranging, moving, deleting or hiding);
    • sharing (privately or publicly);
    • discussing (privately or publicly).

    Clearly, a lot of user needs could be met by the first version of the product. But where to start? Francine Lee, now a UX researcher at Dropbox, took an initial stab at answering this question with a guerilla usability test7 on Dropbox’s existing solution, Dropbox photos. I’m going to take her work a few steps further.

    Business Goals

    In general, Dropbox cares about the growth and monetization of its core business. This is what most companies, hot or not, care about.

    Specifically, the company wants to continue growing its overall user base (whether those users come from the main Dropbox app or from Carousel), driving new users to its main service of backing up and syncing files, upselling them on larger plans, and keeping everyone as engaged and happy as possible.

    Carousel was built to help Dropbox grow.

    So, how does this overlap with users’ needs and, ultimately, with the solution that needs to be designed and built?

    Existing Design Patterns And Their Gaps

    Let’s look at relevant products and design patterns that satisfy both user and business needs, as well as identify any gaps that Carousel might fill. If you’re interested in learning more, check out UXPin’s ebook Mobile UI Design Patterns8.

    We’ll review a few existing mobile photo galleries and other design patterns to understand why Dropbox’s Carousel looks so similar to Apple’s native Photos app as well as Instagram’s direct-messaging feature, which, not coincidentally, is similar to Facebook’s Messenger app. Beyond the fact that Apple has made it nearly impossible9 for third-party developers to build a better app, we believe that Dropbox has taken this path for many reasons. And we have much inspiration to take from Instagram.

    Because Carousel targets the average consumer, we’ll also look at media-gallery applications that target this user base with a strong mobile presence — after all, eyeballs and engagement are going in the direction of mobile. As such, we didn’t look as much into desktop and web-first apps such as iPhoto, Picasa and Unbound or into power-user applications.

    Instead, we’ll focus on Apple’s Photos, Instagram, Google+, Camera+, Flickr, Facebook, Picturelife and Dropbox Photos itself. In “The Best Photo Apps for Keeping Your Memories in the Cloud10,” The Verge analyzes existing solutions in depth and validates our focus on these types of products in evaluating Carousel’s MVP.

    11A comparison of photo apps. (Image credit: The Verge12) (View large version13)

    Given that Loom, a popular photo and video gallery app, was acquired by Dropbox14 within a week after Carousel launched and then decommissioned a month later in May 2014, we did not include it in this discussion. Everpix also recently went out of business15, so we cannot mention much about it either. To give you an idea of how competitive this space is, Everpix was giving away a free two-year trial just for downloading the desktop app, uploading some photos and linking it to a smartphone.

    A demo video of the leading photo application. (Image credit: Loom.com16)

    Taking Photos

    Below are screenshots of Instagram, Apple’s Camera and Flickr.

    They all provide roughly the same functionality (including filters), and all allow users to save copies of photos to their phone’s camera roll, which Dropbox already seamlessly backs up and syncs to the cloud when users opt in. Users don’t need any more options for taking photos, so building this into Carousel’s initial functionality wouldn’t make sense for Dropbox.

    17Interface for taking photos in Instagram, Apple Camera, and Flickr Mobile Apps, respectively. (View large version18)

    Not only does a camera not belong in Dropbox’s core user experience today, but it wouldn’t complement the myriad of other digital cameras out there. By not building camera functionality into Carousel, Dropbox both minimizes development risks and plays nice with the majority of the market for capturing photos and videos, both apps and hardware alike. It just wants the picture once you’ve taken it.

    Editing: Filters and Advanced Editing

    Below are screenshots of Apple’s Photos, Instagram and Camera+.

    As you can see, you’ll also have to consider a myriad of photo- and video-editing options. Enough reasonable solutions seem to be on the market. Again, most of these products allow users to save original and edited copies to their phone’s camera roll, which Dropbox already seamlessly backs up and syncs to the cloud when users opt in.

    19Filters and advanced editing in Apple Photos, Instagram, and Camera+ Mobile Apps, respectively. (View large version20)

    Because capturing and editing photos are usually a part of the same workflow, Dropbox has the same reason for not providing this in the initial version of Carousel: It just wants the picture once you’ve taken it. In addition to this reason, users also theoretically cannot edit photos until they store them on Dropbox. Because one of Dropbox’s primary objectives with Carousel is to increase the number of photos that new and existing users store on Dropbox, what users do thereafter is less important and is potentially a distraction from saving all of their photos and moving on.

    Backing Up and Syncing

    Below are screenshots of Google+, Apple’s Photos, Facebook, Dropbox and Carousel.

    Unfortunately, most camera and photo-editing apps still require users to save photos to the camera roll before backing up and syncing. This multi-step process of safely backing up photos and videos and then clearing the camera roll to save space is not only time-consuming when done at the last minute, but also stressful because there is always the worry that something hasn’t synced properly. Beyond the potential for improvement in tying together the processes of capturing and backing up media, current cloud solutions have some additional design problems.

    On iOS, separating the option to back up the camera roll from the option to upload new photos to the photo stream across all devices could be confusing. In fact, I still barely understand the difference. Google+ also confuses this experience because users might presume they can edit these settings in Google Drive, which they can’t. Google essentially forces users to share images on its publicly skewed social media website, Google+, with no clarity on the privacy settings for this content. While Google+ does offer auto-enhance and “Auto Awesome” — whatever that means — users might go over their data limits or their phone’s battery might die from uploading so many videos or photos over cellular data.

    Facebook, on the other hand, has learned its lesson here and clearly makes media syncing private until the user does something. It also provides some granularity in the settings so that users can sync in the background with peace of mind. And users have a clear option to use Facebook for cloud storage, like Dropbox — obviously, Dropbox is interested in enabling this by default because this is its core business and product value, unlike Facebook.

    21Settings in Google+, Apple Photos, and Facebook Mobile Apps, respectively. (View large version22)

    Dropbox takes care of these use cases elegantly and, as we’ll see, has completely migrated these settings for photos over to Carousel so that users can get to the right place even if they try to edit these settings in Dropbox’s main app.

    23Settings in Dropbox and Carousel Mobile Apps, respectively. (View large version24)

    Viewing

    Below are screenshots of Apple’s Photos, Facebook, Instagram and Picturelife.

    At a basic level, these apps all present photos and videos according to the time and location in which they were shot (sometimes even the building) and in groups and in enlarged individual views. However, this can get confusing when users toggle between views, especially in Apple’s Photos, which has albums, collections and moments, with little or no visual cue of how they relate to each other or what they even mean in the first place — I, for one, still have no idea. This becomes increasingly problematic when users delete photos from their camera roll periodically to save storage space, because there isn’t an easy way to view backed-up media in iCloud.

    25Viewing photos with the Apple Photos Mobile App. (View large version26)

    Facebook is a much simpler solution but, like many cloud-based galleries, has issues with loading speed when the user scrolls quickly because it’s not a native app. Also, accessing these photos is not as simple as it should be — photos are still a secondary experience. On the other hand, Instagram is a photo-first app, but the viewing functionality is limited and extremely cluttered by supporting data (likes, comments, timestamps, etc.).

    27Viewing photos with Facebook’s Mobile App. (View large version28)

    29Viewing photos with Instagram’s Mobile App. (View large version30)

    Compared to the alternatives, Picturelife stands apart with its sheer breadth of options for viewing media not only in your phone’s camera roll but in 10 popular galleries and social networks, including Dropbox, Facebook, Flickr, Foursquare, Google+, Instagram, Shutterfly, Smugmug, Tumblr and Twitter. Users can switch easily between timeline, places, faces, memories, favorites, screenshots and albums. Within each album, they can sort by album name, date taken, date modified, date created or number of pictures. Most importantly, users can use free-form search to find what they’re looking for.

    The primary drawback to so many options is that getting lost in the myriad of photos you’ve taken is easy. Moreover, by syncing so many galleries and networks, many of which have reposted images, users will likely see many duplicates. Nevertheless, this product probably enables you to find any image more quickly than any other solution to date.

    Viewing photos with Picturelife’s Mobile App. (View large version)

    Managing

    Below are screenshots of Apple’s Photos, Facebook and Picturelife.

    This is where many media galleries and camera apps diverge. Management workflows (tagging, arranging, moving, deleting, hiding) are incredibly diverse, and each app seems to prioritize its own variation. At a basic level, most apps enable users to move media between folders, to use a preset viewing filter to stay organized automatically and to delete photos. These actions can typically be done at the level of picture, selected group or album. However, apps vary widely in how they enable users to hide media, duplicate media, save copies and originals, export to other applications, comment, change meta data, unduplicate, and even link media galleries and social networks.

    Apple’s Photos, for instance, enables users to easily select one or more media files and move them between albums or delete them. Likewise, entire albums may be deleted. And a subset of Apple’s photo stream can sync locally, and third-party apps may store copies in Photos as well. However, you can’t manage these accounts from Photos directly. Any other advanced functionality for managing media doesn’t exist. It’s pretty basic.

    Managing Photos with Apple’s Photo Mobile App. (View large version)

    Facebook provides similar functionality. However, slightly more can be done on a mobile phone, including tagging people, liking media files, and viewing all cloud-stored album-organized media that include tags of the user or that are synced from a phone. While the experience of viewing all synced media in the mobile app is sluggish, the user at least isn’t limited to the local storage on their mobile device. In any case, Facebook is still a limited solution.

    Managing Photos with Facebook’s Mobile App. (View large version)

    Picturelife, by contrast, seems to have it all. Users can either touch and hold an image to see resizing options via drag-and-drop gestures or use a standard vertical menu to favorite photos, add them to albums, hide, delete, comment and more. The flexibility of the viewing options makes managing photos and videos effortless. However, a big drawback is that users can’t select multiple images to add them to a new album or to move them.

    Managing Photos with Picturelife’s Mobile App. (View large version)

    Sharing (Private and Public)

    Below are screenshots of Google+, Apple’s Photos and Picturelife.

    Sharing a single piece of media or a group of media publicly is baked into every photo and video application we looked at. Whether users share directly from their photo gallery of choice or save to their camera roll in order to share later on another platform, they have options. That being said, how users add, remove and view media before sharing, how they engage with it once shared, where exactly they may share, and how they add and remove people to share with all vary widely. More importantly, in recent years certain applications have given users the option to share media privately with a select audience, a common activity in chat and email clients.

    Google+ is designed rather well to let users switch seamlessly between sharing a single photo, a selection of photos or an entire album, whether through Google+ itself or by saving directly to the phone’s camera roll. However, users will be sharing photos on Google’s network with “public” recipients as the default. If they want to send to individual recipients, they get a very limited subset of contacts to scroll through — and only within Google’s network — or a search box or preorganized list of contacts, which likely isn’t updated or properly maintained, especially compared to Facebook’s smart lists. Facebook is similar to Google in that it primarily lets you share media publicly with varying degrees of privacy. While Facebook Messenger’s integration of the camera roll into the private chatting experience is nice, users have no real way to send photos from a Facebook album directly to a private audience in chat.

    Sharing photos with the Google+ Mobile App. (View large version)

    Apple offers far greater flexibility with sharing on almost any social network, as well as through SMS and email. However, users get little assistance with selecting recipients and no additional organization of this sharing history, especially if they’ve ever shared across more than one channel. Users are also generally forced to share media publicly on social websites but can share privately through more traditionally private channels such as SMS and email.

    Sharing photos with Apple’s Photos Mobile App. (View large version)

    Picturelife, on the other hand, provides clear flexibility in sending media to a person or group through the phone’s address book or posting to one or more popular social networks. Each option is emphasized equally, so the user can decide how they want to share their photos and videos. Oddly enough for a mobile solution, the contact-selection flow is extremely sluggish and seems to offer only email options, with no SMS option.

    Sharing photos with Picturelife’s Mobile App. (View large version)

    Discussing (Private and Public)

    Below are screenshots of Apple’s Photos, Facebook and Instagram.

    While the limitations on posting publicly follow from the sharing limitations mentioned above, the designs that support private discussion are quite distinct from those for public discussion and seem to vary widely based on each product’s priorities.

    For example, on iOS, users can share multiple photos at once and include anyone in their address book (which is usually anyone with an email address or phone number), but they can’t add more people to the conversation on the fly or reply to a subset of recipients in a separate conversation or view their full media history in a consolidated display (because photos and videos are kept separate).

    Discussing photos with Apple’s Photos Mobile App. (View large version)

    Meanwhile, Facebook allows users to add new recipients, effectively creating a new chat. Additionally, Facebook more clearly displays the various ways users can communicate with recipients, not just by text, photo and video, but with audio and emojis; and the option to choose an existing photo or video or create a new one is obvious at a glance. However, users can only chat with people they’re connected with on Facebook, not anyone in their address book.

    Discussing photos with Facebook’s Mobile App. (View large version)

    Instagram, on the other hand, makes it very easy to switch between private and public discussions. When users post to their followers to have a public discussion, they can also post to popular websites such as Facebook, Twitter, Tumblr and Flickr to continue the conversation there. Alternatively, they can send a direct message to anyone they’re connected with on Instagram. Again, they’re limited to the social network itself, but this is a dramatic improvement over many of the social alternatives.

    Discussing photos with Instagram’s Mobile App. (View large version)

    Time To Focus And Design Carousel

    Now that we thoroughly understand Carousel’s core users, their general needs, Dropbox’s business needs and what exists on the market, it’s time to get something done.

    In part 2 of this series, we’ll detail the product’s primary requirements, summarize Carousel’s state at launch and its performance in the market, and highlight key learnings since the launch. Hopefully, this will help you to design your own MVP, however you like to do that — with whiteboards, sketches, Balsamiq, Photoshop, UXPin or something else.

    (al, ml, il)


    The post Dropbox’s Carousel Design Deconstructed (Part 1) appeared first on Smashing Magazine.

    How I Built The One Page Scroll Plugin

    Mon, 08/25/2014 - 13:03

    Scrolling effects have been around in web design for years now, and while many plugins are available to choose from, only a few have the simplicity and light weight that most developers and designers are looking for. Most plugins I’ve seen try to do too many things, which makes it difficult for designers and developers to integrate them in their projects.

    Not long ago, Apple introduced the iPhone 5S, accompanied by a presentation website that guided visitors down the page one section at a time, with the messaging reduced to one key function per section. I found this to be a great way to present a product, minimizing the risk of visitors accidentally scrolling past key information.

    I set out to find a plugin that does just this. To my surprise, I didn’t find a simple solution to integrate in my current projects. That is when One Page Scroll was born.

    What Is One Page Scroll?

    One Page Scroll is a jQuery plugin that enables you to create a single-scroll layout for a group of sections on a page with minimal markup.

    I will explain how I built this plugin, right from its inception through to planning, testing and finally putting the code out there for free.

    Note: Before building this plugin, I was aware of the controversy over “scroll-hijacking,” whereby a website overrides the browser’s native scrolling behavior to create its own interaction, which confuses some visitors. One Page Scroll would inevitably go against this principle, so I decided to come up with ways to ease the frustration. One good thing about the plugin is that developers may set a fallback that reverts scrolling from its “hijacked” state to its native behavior for certain screen sizes. This way, developers can maintain the high performance and quality of their websites on low-power devices, such as smartphones and tablets. You can also control the length of the animation that takes the visitor from one section to the next, allowing you to avoid the slow transition seen on Apple’s iPhone 5S website.

    What Is Its Purpose?

    As mentioned, most of the plugins that I found offer way too many unnecessary features, making them difficult to integrate. The purpose of this plugin is to solve this issue. The plugin had to:

    • be simple to use,
    • be easy to integrate,
    • require minimal markup,
    • do one thing well (i.e. scroll a page the way the iPhone 5S website does).
    1. To The Drawing Board

    I started by visualizing the plugin as a whole. It should enable visitors to scroll through each section of the page individually. To do that, I needed a way to disable the browser’s default scrolling behavior, while stacking each section in order and moving the page manually when the scrolling is triggered.

    Visualize either in your mind or in sketches. (View large version)

    After that, I broke the concept down into small tasks, trying to come up with a solution to each task in my mind. Here is a list of the functions and tasks that I came up with:

    1. Prepare the layout and position the sections.
      Disable the browser’s default scrolling behavior with CSS by applying overflow: hidden to the body tag. Position each section in sequence, while calculating and attaching all of the necessary information and classes.
    2. Set the manual scrolling trigger.
      Detect the scrolling trigger using jQuery, determine the direction, and then move the layout using CSS.
    3. Add features.
      Add responsiveness, looping, mobile swipe support, pagination, etc.
    4. Test across browsers.
      Make sure the plugin runs fine in all modern browsers (Chrome, Safari, Firefox, Internet Explorer 10) and on the most popular operating systems (Windows, Mac OS X, iOS and Android 4.0+).
    5. Open-source the plugin.
      Create a new repository, structuring it and writing instructions on how to use the plugin.
    6. Widen support.
      Explore other ways to increase support of the plugin.
    2. Building The Foundation

    Now that I had visualized the whole concept, I began to build the plugin with this template:

    !function($) {
      var defaults = {
        sectionContainer: "section",
        …
      };

      $.fn.onepage_scroll = function(options) {
        var settings = $.extend({}, defaults, options);
        …
      }
    }($)

    The template starts off with a !function($) { … }($) module, which provides local scoping to the global variable for jQuery. The purpose of this function is to reduce the overhead for the jQuery lookup ($) and prevent conflicts with other JavaScript libraries.

    The defaults variable at the top holds the default options for the plugin. So, if you don’t define any options, it will fall back to these values.

    The $.fn.onepage_scroll function is the main function that initiates everything. Don’t forget to replace onepage_scroll with your own function name if you are creating your own.

    Disabling the scrolling behavior can be done easily by adding overflow: hidden to the body tag via CSS through a plugin-specific class name. Coming up with a plugin-specific class naming convention is important to avoid conflicts with existing CSS styles. I usually go with an abbreviation of the plugin’s name, followed by a hyphen and a descriptive word: for example, .onepage-wrapper.

    Now that all of the fundamentals are laid out properly, let’s build the first function.

    3. Prepare The Layout And Position The Sections

    Let’s get to the most interesting part: working out the calculations, only to throw away all of that effort later in the process. I thought I needed to position each section in sequence by looping through each one and then positioning them, so that they do not overlap with each other. Here’s the snippet I came up with:

    var sections = $(settings.sectionContainer);
    var topPos = 0;

    $.each(sections, function(i) {
      $(this).css({
        position: "absolute",
        top: topPos + "%"
      }).addClass("ops-section").attr("data-index", i + 1);
      topPos = topPos + 100;
    });

    This snippet loops through each presented selector (sectionContainer is defined in the defaults variable), applies position: absolute and assigns each one with the correct top position that it needs to align properly.

    The top position is stored in the topPos variable. The initial value is 0 and increases as it loops through each one. To make each section a full page and stack up correctly, all I had to do was set the height of each section to 100% and increase the topPos variable by 100 every time it loops through a section. Now, each section should stack up correctly, while only the first section is visible to visitors.
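    The increment logic can be isolated into a pure function. This is an illustrative sketch (the function name is mine, not part of the plugin):

```javascript
// Illustrative helper: compute the top offset, in percent, for each of
// `count` stacked full-page sections (0%, 100%, 200%, and so on).
function sectionTopOffsets(count) {
  var offsets = [];
  for (var i = 0; i < count; i++) {
    offsets.push(i * 100);
  }
  return offsets;
}
```

    For three sections this yields [0, 100, 200], so each section sits exactly one viewport height below the previous one.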

    This might seem easy, but it took me a couple of hours to implement and to see how reliable it is, only to realize in the next step that I did not need any of this at all.

    4. Manual Trigger And Page Transformation

    You might think that the next step would be to move each section to its new position when the scrolling is triggered — I thought so, too. As it turns out, there is a better solution. Instead of moving every single section every time the user scrolls, which would require another loop through and another calculation, I wrapped all of the sections in one container and used CSS3’s translate3d to move the whole wrapper up and down. Because translate3d supports percentage-based values, we can use our previous top position calculation to move each section into the viewport without having to recalculate it. Another benefit is that this gives you control over the timing and easing settings of your animation.
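    To make the idea concrete, here is a sketch of how the wrapper’s transform value can be derived from the target section’s index alone. The function name is hypothetical, not the plugin’s actual API:

```javascript
// Hypothetical helper: moving to 1-based section `index` shifts the
// wrapper up by (index - 1) * 100% of the viewport height.
function transformForSection(index) {
  var pos = -(index - 1) * 100;
  return "translate3d(0, " + pos + "%, 0)";
}
```

    Calling transformForSection(3), for example, produces "translate3d(0, -200%, 0)", which slides the third section into the viewport.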

    As you may have noticed, this solution makes the positioning snippet illustrated in the previous step unnecessary because the wrapper that we’ve introduced makes each section stack up correctly without any extra styling required.

    The first solution you come up with is not always the most efficient, so make sure to leave time for experimentation. (View large version)

    Now, all we have to do is detect the direction of the user’s scrolling and move the wrapper accordingly. Here’s the code to detect the scrolling direction:

    function init_scroll(event, delta) {
      var deltaOfInterest = delta,
          timeNow = new Date().getTime(),
          quietPeriod = 500;

      // Cancel scroll if currently animating or within quiet period
      if (timeNow - lastAnimation < quietPeriod + settings.animationTime) {
        event.preventDefault();
        return;
      }

      if (deltaOfInterest < 0) {
        el.moveDown();
      } else {
        el.moveUp();
      }

      lastAnimation = timeNow;
    }

    $(document).bind('mousewheel DOMMouseScroll', function(event) {
      event.preventDefault();
      var delta = event.originalEvent.wheelDelta || -event.originalEvent.detail;
      init_scroll(event, delta);
    });

    In the snippet above, I first bind a function to the mousewheel event (or DOMMouseScroll for Firefox) so that I can intercept the scrolling data and determine the direction of the scrolling. By binding my own init_scroll function to these events, I’m able to pass the wheel delta to init_scroll and detect the direction.
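    The direction check itself reduces to the sign of the delta. A minimal sketch, with an illustrative function name:

```javascript
// Illustrative helper: a negative wheel delta means the user scrolled
// down the page, so the next section should move into view; a positive
// delta means the user scrolled up.
function scrollDirection(delta) {
  return delta < 0 ? "down" : "up";
}
```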

    In a perfect world, all I would have to do to detect and move each section is retrieve the delta from the wheelDelta property, use the value to determine the direction and perform the transformation. That, however, is not possible. When you are dealing with a sequencing animation, you must create a fail-safe to prevent the trigger from doubling, which would cause the animation to overlap. We can use setInterval to sort this problem out by calling each animation individually, with its own time set apart to create a sequence. But for precision and reliability, setInterval falls short because each browser handles it differently. For example, in Chrome and Firefox, setInterval is throttled in inactive tabs, causing the callbacks not to be called in time. In the end, I decided to turn to a timestamp.

    var timeNow = new Date().getTime(),
        quietPeriod = 500;
    …
    if (timeNow - lastAnimation < quietPeriod + settings.animationTime) {
      event.preventDefault();
      return;
    }
    …
    lastAnimation = timeNow;

    In the snippet above (extracted from the previous one), you can see that I have assigned the current timestamp to the timeNow variable before the detection, so that it can check whether enough time (the quiet period plus the animation time) has passed since the previous animation started. If not, the condition prevents the new transformation from overlapping the ongoing animation. By using a timestamp instead of setInterval, we can check the timing more accurately, because the comparison relies on the system clock rather than on the browser’s timer behavior.
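    Stripped of the DOM specifics, this quiet-period guard can be modeled as a small closure. This is a sketch with names of my own choosing; the current time is passed in explicitly so the logic can be exercised without a real clock:

```javascript
// Illustrative sketch of the quiet-period guard: a new animation may
// start only if quietPeriod + animationTime milliseconds have passed
// since the previous animation started.
function makeScrollGate(quietPeriod, animationTime) {
  var lastAnimation = -Infinity; // allow the very first scroll
  return function (timeNow) {
    if (timeNow - lastAnimation < quietPeriod + animationTime) {
      return false; // still animating or within the quiet period
    }
    lastAnimation = timeNow;
    return true;
  };
}
```

    With a 500 ms quiet period and a 1,000 ms animation, a scroll 1,000 ms after the previous one is rejected, while a scroll 1,500 ms after it is allowed.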

    if (deltaOfInterest < 0) {
      el.moveDown();
    } else {
      el.moveUp();
    }

    moveUp and moveDown are functions that update all attributes of the layout to reflect the current state of the website. Data such as the current index and the name of the current section’s class are updated in these functions. Each of them then calls the final transform method to move the next section into the viewport.

    $.fn.transformPage = function(settings, pos, index) {
      …
      $(this).css({
        "-webkit-transform": (settings.direction == 'horizontal') ? "translate3d(" + pos + "%, 0, 0)" : "translate3d(0, " + pos + "%, 0)",
        "-webkit-transition": "all " + settings.animationTime + "ms " + settings.easing,
        "-moz-transform": (settings.direction == 'horizontal') ? "translate3d(" + pos + "%, 0, 0)" : "translate3d(0, " + pos + "%, 0)",
        "-moz-transition": "all " + settings.animationTime + "ms " + settings.easing,
        "-ms-transform": (settings.direction == 'horizontal') ? "translate3d(" + pos + "%, 0, 0)" : "translate3d(0, " + pos + "%, 0)",
        "-ms-transition": "all " + settings.animationTime + "ms " + settings.easing,
        "transform": (settings.direction == 'horizontal') ? "translate3d(" + pos + "%, 0, 0)" : "translate3d(0, " + pos + "%, 0)",
        "transition": "all " + settings.animationTime + "ms " + settings.easing
      });
      …
    }

    Above is the transform method that handles the movement of each section. As you can see, I’ve used the CSS3 transformation to handle all of the manipulation with JavaScript. The reason I did this in JavaScript, rather than in a separate style sheet, is to allow developers to configure the behavior of the plugin — mainly the animation’s timing and easing — through their own function calls, without having to go into a separate style sheet and dig for the settings. Another reason is that the animation requires a dynamic value to determine the percentage of the transition, which can only be calculated in JavaScript by counting the number of sections.

    5. Additional Features

    I was reluctant to add features at first, but having gotten so much great feedback from the GitHub community, I decided to improve the plugin bit by bit. I released version 1.2.1, which adds a bunch of callbacks and loops and, hardest of all, responsiveness.

    In the beginning, I didn’t focus on building a mobile-first plugin (which I still regret today). Rather, I used a simple solution (thanks to Eike Send for his swipe events) to detect and convert swipe data into usable delta data, in order to use it in my init_scroll function. That doesn’t always yield the best result in mobile browsers, such as custom Android browsers, so I ended up implementing a fallback option that lets the plugin fall back to its native scrolling behavior when the browser reaches a certain width. Here’s the script that does that:

    var defaults = {
      responsiveFallback: false
      …
    };

    function responsive() {
      if ($(window).width() < settings.responsiveFallback) {
        $("body").addClass("disabled-onepage-scroll");
        $(document).unbind('mousewheel DOMMouseScroll');
        el.swipeEvents().unbind("swipeDown swipeUp");
      } else {
        if ($("body").hasClass("disabled-onepage-scroll")) {
          $("body").removeClass("disabled-onepage-scroll");
          $("html, body, .wrapper").animate({ scrollTop: 0 }, "fast");
        }

        el.swipeEvents().bind("swipeDown", function(event) {
          if (!$("body").hasClass("disabled-onepage-scroll")) event.preventDefault();
          el.moveUp();
        }).bind("swipeUp", function(event) {
          if (!$("body").hasClass("disabled-onepage-scroll")) event.preventDefault();
          el.moveDown();
        });

        $(document).bind('mousewheel DOMMouseScroll', function(event) {
          event.preventDefault();
          var delta = event.originalEvent.wheelDelta || -event.originalEvent.detail;
          init_scroll(event, delta);
        });
      }
    }

    First, I’ve defined a responsiveFallback option (false by default) to control this fallback. Its value determines the width below which the plugin should trigger the fallback.

    The snippet above checks the browser’s width to determine whether the responsive fallback should run. If the width falls below the value defined in responsiveFallback, the function unbinds all of the events, such as swiping and scrolling, returns the user to the top of the page to prepare for the realignment of each section, and then re-enables the browser’s default scrolling behavior so that the user can swipe through the page as usual. If the width exceeds the value, the plugin checks for the disabled-onepage-scroll class to determine whether the fallback is currently active; if it is, the class is removed and the custom scrolling behavior is initialized again.
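    The decision itself is just a width comparison. Here is a sketch with illustrative names (the real plugin also rebinds events, as the snippet above shows):

```javascript
// Illustrative predicate: fall back to native scrolling when the
// viewport is narrower than the configured threshold; a threshold of
// false (the default) disables the fallback entirely.
function shouldFallback(viewportWidth, responsiveFallback) {
  return responsiveFallback !== false && viewportWidth < responsiveFallback;
}
```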

    The solution is not ideal, but it gives the option for designers and developers to choose how to handle their websites on mobile, rather than forcing them to abandon mobile.

    6. Cross-Browser Testing

    Testing is an essential part of the development process, and before you can release a plugin, you must make sure that it runs well on the majority of machines out there. Chrome is my go-to browser, and I always start developing in it. It has many benefits as one’s main development browser, though your personal preference might vary. For me, Chrome has a more efficient inspection tool. Also, when I get a plugin to work in Chrome, I know that it will probably also work in Safari and Opera.

    I mainly use my MacBook Air to develop plugins, but I also have a PC at home to check across platforms. When I get a plugin to work in Chrome, then I’ll test manually in Safari, Opera and (lastly) Firefox on Mac OS X, followed by Chrome, Firefox and Internet Explorer (IE) 10 on Windows.

    The reason I test only these browsers is that the majority of users are on them. I could have tested IE 9 and even IE 8, but that would have prevented me from releasing the plugin in time with the launch of the iPhone 5S website.

    This is generally not a good practice, and I’ll avoid doing it in future. But the good thing about making the plugin open-source is that other developers can help patch it after its release. After all, the purpose of an open-source project is not to create the perfect product, but to create a jumping-off point for other developers to extend the project to be whatever they want it to be.

    Don’t forget to test on mobile devices before launching your plugin. (View large version)

    To ease the pain of cross-browser testing, every time I complete a plugin, I’ll create a demo page to show all of the features of the plugin, and then I’ll upload it to my website and test it, before sharing the plugin on GitHub. This is important because it enables you to see how the plugin performs in a real server environment and to squash any bugs that you might not be able to replicate locally. Once the demo page is up and running on my website, I’ll take the opportunity to test the plugin on other devices, such as phones and tablets.

    With these tests, you will have covered the vast majority of browsers out there and prepared the plugin for the real world.

    7. Open-Sourcing Your Plugin

    When you think the plugin is ready, the final step is to share it on GitHub. To do this, create an account on GitHub, set up Git and create a new repository. Once that is done, clone the repository to your local machine. This should generate a folder with your plugin’s name on your local machine. Copy the plugin to the newly created folder and structure your repository.

    Repository Structure

    How you structure your repository is all up to you. Here’s how I do it:

    • The demo folder consists of working demos, with all required resources.
    • The minified and normal versions of the plugin are in the root folder.
    • The CSS and sample resources, such as images (if the plugin requires it), are in the root folder.
    • The readme file is in the root directory of the generated folder.
    Readme Structure

    Another important step is to write clear instructions for the open-source community. Usually, all of my instructions are in a readme file, but if yours require a more complex structure, you could go with a wiki page on GitHub. Here is how I structure my readme:

    1. Introduction
      Explain the purpose of the plugin, accompanied by an image and a link to the demo.
    2. Requirements and compatibility
      Put this up front so that developers can see right away whether they’ll want to use the plugin.
    3. Basic usage
      This section consists of step-by-step instructions, from including the jQuery library to adding the HTML markup to calling the function. This section also explains the options available for developers to play with.
    4. Advanced usage
      This section contains more complex instructions, such as any public methods and callbacks and any other information that developers would find useful.
    5. Other resources
      This section consists of links to the tutorial, credits, etc.
    8. Widening Support

    This plugin doesn’t really need the jQuery library to do what it does, but because of the pressure to open-source it in time for the iPhone 5S website, I decided to take a shortcut and rely on jQuery.

    To make amends, and exclusively for Smashing Magazine’s readers, I have rebuilt One Page Scroll using pure JavaScript (a Zepto version is also available). With the pure JavaScript version, you no longer need to include jQuery. The plugin works right out of the box.

    Pure JavaScript And Zepto Versions

    Rebuilding the Plugin in Pure JavaScript

    The process of building support for libraries can seem daunting at first, but it’s much easier than you might think. The most difficult part of building a plugin is getting the math right. Because I had already done that for this one, transforming the jQuery plugin into a pure JavaScript one was just a few hours of work.

    Because the plugin relies heavily on CSS3 animation, all I had to do was replace the jQuery-specific methods with identical JavaScript methods. Also, I took the opportunity to reorganize the JavaScript into the following standard structure:

    • Default variables
      This is essentially the same as the jQuery version, in which I defined all of the variables, including the default variables for options to be used by other functions.
    • Initialize function
      This function is used for preparing and positioning the layout and for the initialization that is executed when the onePageScroll function is called. All of the snippets that assign class names, data attributes and positioning styles and that bind all keyboard inputs reside here.
    • Private methods
      The private method section contains all of the methods that will be called internally by the plugin. Methods such as the swipe events, page transformation, responsive fallback and scroll detection reside here.
    • Public methods
      This section contains all of the methods that can be called manually by developers. Methods such as moveDown(), moveUp() and moveTo() reside here.
    • Utility methods
      This section contains all of the helpers that replicate a jQuery function to speed up development time and slim down the JavaScript’s file size. Helpers such as Object.extend, which replicates the jQuery.extend function, reside here.
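    As an example of such a helper, a shallow-merge replacement for jQuery.extend can be written in a few lines. This is a simplified sketch, not the plugin’s exact source:

```javascript
// Simplified sketch of a jQuery.extend-style helper: copies the own
// properties of each source object onto the target, left to right.
function extend(target) {
  for (var i = 1; i < arguments.length; i++) {
    var source = arguments[i];
    for (var key in source) {
      if (Object.prototype.hasOwnProperty.call(source, key)) {
        target[key] = source[key];
      }
    }
  }
  return target;
}
```

    The plugin can then merge user options over its defaults with extend({}, defaults, options), mirroring the jQuery version.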

    I ran into some annoyances, such as when I had to write a method just to add or remove a class name, or when I had to use document.querySelector instead of the simple $. But all of that contributes to a better, more structured plugin, which benefits everyone in the end.

    Rebuilding the Plugin in Zepto

    The reason why I decided to support Zepto, despite the fact that it only supports modern browsers (IE 10 and above), is that it gives developers a more efficient and lightweight alternative to jQuery version 2.0 and above, with a more versatile API. Zepto’s file size (around 20 KB) is considerably lower than jQuery 2.0’s (around 80 KB), which makes a big difference in page-loading speed. Because websites are being accessed more on smartphones, Zepto might be a better alternative to jQuery.

    Rebuilding a jQuery plugin with Zepto is a much easier task because Zepto is similar to jQuery in its approach to the API, yet faster and more lightweight. Most of the script is identical to the jQuery version except for the animation part. Because Zepto’s $.fn.animate() supports CSS3 animation and the animationEnd callback right off the bat, I can take out this ugly snippet:

$(this).css({
  "-webkit-transform": "translate3d(0, " + pos + "%, 0)",
  "-webkit-transition": "-webkit-transform " + settings.animationTime + "ms " + settings.easing,
  "-moz-transform": "translate3d(0, " + pos + "%, 0)",
  "-moz-transition": "-moz-transform " + settings.animationTime + "ms " + settings.easing,
  "-ms-transform": "translate3d(0, " + pos + "%, 0)",
  "-ms-transition": "-ms-transform " + settings.animationTime + "ms " + settings.easing,
  "transform": "translate3d(0, " + pos + "%, 0)",
  "transition": "transform " + settings.animationTime + "ms " + settings.easing
});

$(this).one('webkitTransitionEnd otransitionend oTransitionEnd msTransitionEnd transitionend', function(e) {
  if (typeof settings.afterMove == 'function') settings.afterMove(index, next_el);
});

    And I’ve replaced it with this:

$(this).animate({
  translate3d: "0, " + pos + "%, 0"
}, settings.animationTime, settings.easing, function() {
  if (typeof settings.afterMove == 'function') settings.afterMove(index, next_el);
});

With Zepto, you can animate with CSS3 without having to define all of the CSS styles or bind the callback yourself. Zepto handles all of that for you through the familiar $.fn.animate() method, which works similarly to jQuery's $.fn.animate() but with CSS3 support.

    Why Go Through All the Trouble?

As jQuery has become many people's go-to library, it has also grown increasingly complex and clunky, and it sometimes performs poorly. By providing versions for other platforms, you will increase the reach of your plugin.

Going back to the foundation will also help you to build a better, more compliant plugin for the future. jQuery and other libraries are very forgiving of minor structural problems, like missing commas, and of shortcuts like $(element); these are the kinds of things that made me a little lazy and could compromise the quality of my plugins. Without all of these shortcuts in pure JavaScript, I was more aware of what was going on in my plugin, which methods were affecting performance and what exactly I could do to optimize it.

    Even though JavaScript libraries such as jQuery have made our lives easier, using one might not be the most efficient way to accomplish your goal. Some plugins are better off without them.


There you have it, the process I went through to build One Page Scroll. I made many mistakes and learned from them along the way. If I were to develop this plugin today, I would take a mobile-first approach and would add more comments to the code so that other people could extend the plugin more easily.

Without the support of design and development communities such as GitHub, StackOverflow and, of course, Smashing Magazine, I wouldn't have been able to create this plugin in such a short time. These communities have given me so much in the past few years. That is why One Page Scroll and all of my other plugins are open-source and available for free. That's the best way I know how to give back to such an awesome community.

    I hope you’ve found this article useful. If you are working on a plugin of your own or have a question or suggestion, please feel free to let us know in the comments below.


    (al, il, ml)


    The post How I Built The One Page Scroll Plugin appeared first on Smashing Magazine.

