More than Just Design: How You Can Significantly Boost Performance with Code Optimizations

When it comes to technology, we are not particularly patient. A snappy user experience beats a glamorous one. We engage more with a site when we can move freely and focus on the content, instead of an endless wait. User Experience research recommends the following responsiveness guidelines:

  • Delays < 0.1 seconds feel instantaneous.
  • Delays < 1 second are the limit for a user’s flow of thought to stay uninterrupted. The user will notice the delay but can still focus on the task at hand.
  • Delays between 2 and 10 seconds should show a spinning icon to visually indicate something is happening.
  • Delays > 10 seconds should show a percent-done indicator along with a cancel button.

Just to put these requirements in context, it takes between 0.1 and 0.4 seconds to blink and about six seconds to take a breath. With these kinds of demands, if you don’t design with performance in mind, nobody will use the product. But performance isn’t as simple as you might think.

Why Design Is Critical in a World Where Not All Operations Are Created Equal

To deliver the functionality of an application, many different types of tasks need to be performed: reading data from a database, performing calculations, security checks and rendering User Interface (UI) components. From the perspective of performance, not all operations are created equal. Operations performed in memory are orders-of-magnitude faster than operations that require calls over the network.

For example, it takes 20 milliseconds to read 1 MB sequentially from a disk, whereas it only takes 250 microseconds to read 1 MB sequentially from memory.
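This gap is easy to see for yourself. The sketch below (Python; absolute timings will vary wildly by machine, and the OS page cache may make a freshly written file read far faster than a cold disk read) compares copying 1 MB in memory with reading the same 1 MB back from a temporary file:

```python
import os
import tempfile
import time

# 1 MB of data to move around. Numbers are machine-dependent; this is
# a rough illustration, not a rigorous benchmark.
payload = os.urandom(1024 * 1024)

# Memory: copy the buffer.
start = time.perf_counter()
copied = bytes(payload)
memory_seconds = time.perf_counter() - start

# Disk: write it out, then read it back. Note the OS page cache may
# serve this read from memory, flattering the disk number.
with tempfile.NamedTemporaryFile(delete=False) as f:
    f.write(payload)
    path = f.name
start = time.perf_counter()
with open(path, "rb") as f:
    from_disk = f.read()
disk_seconds = time.perf_counter() - start
os.unlink(path)

print(f"memory: {memory_seconds * 1e6:.0f} us, disk: {disk_seconds * 1e6:.0f} us")
```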

When starting the design for your application, you will want to consider the most critical parts and come up with an architecture that is appropriate for the problem at hand. You will need to start with objective goals.

  1. How long should an average request take?
  2. How many concurrent users should the application support?
  3. What is the peak load the application must handle?
  4. What is the deployment environment for the application?

Think of it this way: If you’re using a hacksaw to clear-cut a forest, you are going to have a bad day. If you don’t have the right tools for the job, no amount of heroics is going to help.

Using a layered design will make identifying and correcting performance problems much simpler. There are many potential reasons for a sluggish application. Having multiple types of operations occurring within the same chunk of code can obfuscate which chunks are slow and why. It also makes it more difficult to optimize without breaking unrelated code.

How to Use Code Optimizations Effectively to Maximize Performance

Code optimizations change already-correct code so that it executes faster. To do it effectively, first define your optimization goals clearly and measure the impact of the changes you make using a profiling tool. Do not start performing optimizations without a profiling tool of some sort. Performance problems are often not obvious, and the changes you introduce to the code base could very well make it perform worse!

Before you optimize, you’ll want to make sure the code works. Then you’ll want to refactor it to make it right and easy to understand. Only then should you optimize it.
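In Python, for example, the standard library’s cProfile module is enough to get started; `slow_report` here is just a hypothetical stand-in for whatever function you suspect is slow:

```python
import cProfile
import io
import pstats

# A hypothetical suspect function; substitute your own hot code path.
def slow_report():
    total = 0
    for i in range(100_000):
        total += i * i
    return total

# Profile just the call we care about.
profiler = cProfile.Profile()
profiler.enable()
result = slow_report()
profiler.disable()

# Print the functions that consumed the most cumulative time.
stream = io.StringIO()
stats = pstats.Stats(profiler, stream=stream).sort_stats("cumulative")
stats.print_stats(5)
print(stream.getvalue())
```

Measure first, change second, then measure again: the profile tells you whether your optimization actually helped.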

Code optimization looks different depending on the type of operation.

  1. For I/O-bound operations, you will want to first identify the costly operations during application profiling. This might include disk reads, web service calls or database calls. Once identified, you might consider rewriting the slow parts using either multithreading or batching:
    • Multithreading executes multiple threads simultaneously. If you multi-thread code, break the task into individual pieces that don’t require communication and locking. Race conditions and deadlocks are very difficult to identify and fix.
    • Batching updates the interface to perform a batch of work instead of making multiple network calls. By batching work, you limit the number of expensive network calls that are made.
  2. For CPU-bound optimizations, know your algorithms and data structures. Restructure your applications to use the most appropriate structure. For example, if you need to determine the intersection of two lists in memory, a hashset will be much faster than a list. The complexity class for performing a lookup is O(1) for a hashset instead of O(N) for a list. Comparing two lists with 100,000 items, that is a difference between 100,000 and 10 billion operations.
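A quick sketch of the list-versus-hashset point, using smaller lists so the O(N²) version finishes quickly (the gap widens dramatically as N grows toward the 100,000 items mentioned above):

```python
import time

n = 2_000
list_a = list(range(n))
list_b = list(range(n // 2, n + n // 2))

# O(N^2): each `in` test scans list_b element by element.
start = time.perf_counter()
slow = [x for x in list_a if x in list_b]
list_seconds = time.perf_counter() - start

# O(N): build a hash set once, then each lookup is O(1) on average.
set_b = set(list_b)
start = time.perf_counter()
fast = [x for x in list_a if x in set_b]
set_seconds = time.perf_counter() - start

print(f"list: {list_seconds:.4f}s  set: {set_seconds:.4f}s")
assert slow == fast  # same answer, very different cost
```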

If you want to produce a product people will use, performance must be at the front of your mind. Ultimately, if the application design is faulty, code optimizations probably won’t save the application at the last minute. But once the design is right, code optimizations can be used to identify and speed up the hot paths within the application to really set your application apart.

At Daugherty, our Dev Center contains software engineers who have been involved in all aspects of delivering high-quality software: from requirements analysis to deployment automation. If you’re a developer ready to work for many different brand-name clients, developing and delivering pragmatic solutions to their business needs, join our team today.

Jason Turan

Jason Turan is a Senior Consultant at Daugherty Business Solutions. He has worked on a variety of platforms and industries since 2008 and particularly enjoys creating simple yet flexible applications.

How to Succeed with Big Data without an Admin Team

New technology often comes with a new way of thinking. Usually, people are quick to embrace the exterior — the “shiny new object” component of new technology — but evolving a mindset is more difficult.

Developers and architects are no different.

For example, when the three pillars associated with Big Data first hit the scene (horizontal scaling, self-healing clusters and data locality), they required a shift in thinking.

Before horizontal scaling, organizations were used to vertical scaling: The 286 would get thrown out to make room for a 386, which was later discarded for a 486, which in turn was discarded for a Pentium and so on. Software worked the same way: Oracle Server becomes Oracle RAC and is later upgraded to Oracle Exadata. But horizontal scaling meant that adding capacity was no longer a matter of discarding servers for upgraded models. Increasing CPU, memory or disk capacity was simply a matter of adding a server to the cluster.

Today, many organizations use Hadoop to build compute clusters from commodity hardware: It is an open-source software framework that assumes hardware failures are common occurrences and handles them automatically.

But Big Data doesn’t have to mean just Hadoop anymore.

Getting Your Organization’s Head in the Cloud

Hadoop is fantastic, but it has its limitations. Some of the downsides to Hadoop include:

  • You need a Hadoop administration group who can tweak the installation for performance. Hadoop nowadays comes with many components, which means it can get complicated fast.
  • Your servers generally need to be up and running. With Hadoop, you may end up paying for servers even when you aren’t using them heavily.

Many organizations, as a result, are moving to the cloud. It’s easy to scale up, and it fills some key gaps in infrastructure management. For example, you would never have to worry about swapping out a failed hard disk. The technology makes a virtual cluster of computers available at all times.

But using the cloud for Big Data? For not just a few gigabytes, but petabytes of information?

The cloud is capable of handling many of these components.

Nine Big Data Components the Cloud Handles as Well as Hadoop

Nine Big Data components the cloud can handle just as well as Hadoop include:

  1. Data ingestion. Data ingestion is the process of obtaining and importing data for immediate use or storage in a database. Using the cloud, you can set up an API and connect to compute services that allow serverless code execution. Your organization would pay only for code execution time. Compare this to Hadoop, in which your servers generally need to be up and running.
  2. Data processing in real time. Almost every organization today demands data processing in real time: a constant input, process and output of data. Using the cloud, your organization can stream data in real time.
  3. Storing data. Data stored in the cloud is highly available and highly durable. Versioning can be turned on.
  4. Analyzing data. Your organization can analyze data in a variety of ways: regression, in which the output variable takes continuous values, or classification, in which the output variable takes class labels. Data can also be analyzed in batch or synchronous modes.
  5. Search. Managed search services in the cloud (such as hosted Elasticsearch) are designed for horizontal scalability, reliability and easy management.
  6. Scalability. Most cloud-based services would allow your organization to add resources to a cluster with the click of a button.
  7. Fault tolerance. One of the pillars of Big Data is that it be self-healing: Data is replicated all over the place. In a physical cluster, if you were to blow away a single machine with a shotgun, the cluster would figure out what that machine was doing and recover the data from replicas. Cloud-based services are no different. They can treat infrastructure as code and thus be automated to react in the case of an event.
  8. Security. Policies can be set up for each resource with fine-grained access control. While this is arguably one area that could still call for a skilled administrator even in the cloud, security for your clusters can still be managed through the cloud itself.
  9. Cost. Cloud-based services aren’t like your gym membership. In general, you don’t have to sign up for an impossible-to-get-out-of contract, and you only pay for what you use.
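To make the ingestion point concrete, here is a hypothetical serverless-style handler sketched in Python. The event shape, the `handler` signature and the in-memory `_records` store are illustrative assumptions, not any specific vendor’s API:

```python
import json

# Stand-in for a durable store such as object storage or a queue.
_records = []

def handler(event, context=None):
    """Validate an incoming record and persist it.

    In a serverless model, you pay only while this function runs;
    there is no server idling between requests.
    """
    try:
        record = json.loads(event["body"])
    except (KeyError, ValueError):
        return {"statusCode": 400, "body": json.dumps({"error": "bad payload"})}
    if "id" not in record:
        return {"statusCode": 422, "body": json.dumps({"error": "missing id"})}
    _records.append(record)
    return {"statusCode": 200, "body": json.dumps({"stored": record["id"]})}

# Example invocation, shaped the way an API gateway might deliver it.
response = handler({"body": json.dumps({"id": "42", "value": 3.14})})
print(response)
```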

Gone are the days where Big Data requires an onsite administrative team. With the cloud, your organization can achieve the same business insights at a lower price point. Daugherty can help you adapt to this new way of thinking. For more information, contact us today.

Are You Wasting Time and Money on Bad Planning?

The project itself probably isn’t what’s taking most of your time. Sometimes, 80 percent of a project is in the planning. Engineers consider features like asset conditions, systems, crews, vendors and workflows, then the costs of each.

But what if you could plan with twice the focus in half the time?

If you think it’s too good to be true, consider this:

On the surface, planning is easy. You incorporate triggering events among human and systems actors into a swim lane, then build out a functional decomposition to show the individual elements of a business process. From there, you can estimate the time and cost for each business process.

It’s scaling this, however, where cracks tend to appear, resulting in challenges during the implementation.

Why Something as Simple as Planning Often Proves Difficult

Perhaps the planning process works well for projects that cost anywhere from tens of thousands of dollars to the low millions, with durations of less than a calendar year.

But the process may not be consistently applied, nor well understood across stakeholders. Results may not spring from project objectives because of misaligned expectations, unexpected issues, perfectionism, stakeholder demands or scope shifting.

Whatever the case, it might seem intuitive to hold working sessions that extend daily over weeks or months, with too many semi-contributors who are never fully engaged through to the end.

For example, what better way to avoid misunderstandings than to lay out all aspects of the project unambiguously for both project managers and business analysts? After all, both are charged with meeting project objectives, developing goals, identifying risks and developing strategic plans.

Right?

Wrong.

While project managers and business analysts have similar roles, they differ from one another in key areas.

Project managers manage schedule, cost and scope. Business analysts develop work breakdown structures (WBS) and tasks, elicit requirements from stakeholders and subject matter experts (SMEs), and manage the information that comes from them.

Consequently, bringing the two together for an eight-hour session not only eats up a huge chunk of their time, it can, at worst, cause the very misunderstandings you’re trying to avoid.

For a superior method, consider this.

How to Rapidly Improve Business with an RPM Approach and Block Schedule

In a nutshell, the challenge to large-scale planning and estimation is scope management. Modeling working sessions after a Rapid Process Map (RPM) block schedule can save time and money.

With the RPM approach, you’d engage four hours per day across two sessions, with a focus on having the right people in the room, rather than incorporating everyone all at once for the duration. This could potentially cut the time in half and lead to more focused work sessions with aligned and communicated goals. Employees could keep up with their day jobs, yet engage fully during the work sessions.

The result — a solid scope management plan and a requirements plan.

The scope management plan would document how the project scope will be defined, validated and controlled. The requirements plan would describe how project requirements will be analyzed, documented and managed.

Next would come the process of implementing your project objectives.

How to Guarantee Results by Detecting Hidden Scope and Technical Issues

With planning, a five-step Initial Estimating Sprint Zero can achieve a 75 percent confidence level. The five-step approach provides early detection of “hidden scope” and technical issues.

The five steps include Scoping and Prioritization; Decomposition; Abbreviated Impact Analysis; Budget Estimating; and Initial Program Planning.

    Scoping and Prioritization considers core objectives, phases and key dates.

    Decomposition breaks down high-level processes into individual steps and systems.

    Abbreviated Impact Analysis identifies and prioritizes features and reports, tying the features to the objective.

    Budget Estimating considers the cost of implementing these features based on the number of hours they will take. A top-down approach is most effective here. Start with epics, which are collections of user stories that are usually short on detail. Then work down from there.

    Initial Program Planning tunes financial estimations by engaging all the user stories from the bottom up to determine options and potential savings.

Daugherty RPM is Daugherty’s trademark innovative approach to handling large-scale estimation. We can provide a faster, better and cheaper solution with our approach, which has been proven in our success with large-scale organizations like Anheuser-Busch, The Home Depot and MasterCard. Whatever size estimation you need, Daugherty can deliver. Contact us today.

Privacy by Design: Why It’s Critical to Your Organization

No surprise: Privacy is a big deal.

If your organization is storing data about people, privacy should be a big deal to you. This is especially true if you’re storing data about people in the European Union (EU), in light of the General Data Protection Regulation (GDPR), which goes into effect May 25, 2018.

GDPR is designed to strengthen and unify data protection for all individuals within the EU and give control back to citizens over their personal data. Under GDPR, organizations in breach can be fined up to four percent of their annual global turnover (revenue) or €20 million (whichever is greater).

This is the maximum fine, which can be imposed for the most serious infringements, like not having sufficient customer consent to process data.

And it’s per infringement.

So what precisely does it mean to have data that aren’t properly anonymized?

The Look of Data that Can Incur Hefty Fines

Weak anonymization algorithms are one way of violating user privacy.

Remember Where’s Waldo? In the books, Waldo is hidden among a large crowd, and we are invited to pore over the pages, scanning for his trademark red-and-white-striped shirt, bobble hat and glasses. Knowing what to look for makes it slightly easier, although the books introduce red herrings to make Waldo more difficult to spot.

Imagine if an entire Where’s Waldo? illustration contained a mass of people all wearing dull green, surrounded by dull green landmarks, and Waldo in his trademark red.

He’d be really easy to spot.

A GDPR infringement occurs if somebody can determine a person’s identity through data, even if anonymization algorithms are in place. Best intentions don’t matter.

One of the best methodologies your organization can institute to comply with GDPR is to adopt a Privacy by Design approach to your systems.

Privacy by Design: Where Outcomes, Not Intentions, Matter

Privacy by Design is an approach to systems engineering that takes privacy into account throughout the whole engineering process.

It’s not about data protection per se.

Rather, the system is engineered in such a way that it doesn’t need protection.

The root principle is based on enabling service without having the client become identifiable or recognizable.

Three examples of Privacy by Design include:

    1. Dynamic Host Configuration Protocol (DHCP). With DHCP, a server maintains a pool of IP addresses, and randomly assigns an IP address to a device. Because the IP address is “leased” to a device, it doesn’t leak personal identifiers about the person using the device.

    2. Global Positioning System (GPS). A GPS receiver doesn’t require you to transmit data; rather, it relies on signals broadcast from GPS satellites whose positions are known. It computes your geographic location locally, without leaking your identity or position to anyone.

    3. Radio-Frequency Identification (RFID). As it pertains to the Internet of Things (IoT), RFID can act as the bridge between the physical and digital world. The RFID tag is preregistered with the host system to establish identification. Then the tag communicates only by broadcasting its ID.

Zero-knowledge proof is one way you can implement Privacy by Design. It is a means of establishing proof by using something other than personal identifiers.

For example, a gambling website may use a Facebook sign-in, which can guarantee proof of age by asking Facebook.
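As a toy illustration of the idea (not a true zero-knowledge proof, and not production cryptography), a trusted issuer standing in for Facebook could sign only the claim the verifier needs, so no birthdate or name ever changes hands. The key handling and claim format here are invented for the sketch:

```python
import hashlib
import hmac

# Held by the identity provider only; illustrative, not a real key scheme.
ISSUER_KEY = b"issuer-secret"

def issue_claim(over_18: bool) -> tuple[str, str]:
    """The issuer signs a bare attribute claim — no personal identifiers."""
    claim = f"over_18={str(over_18).lower()}"
    signature = hmac.new(ISSUER_KEY, claim.encode(), hashlib.sha256).hexdigest()
    return claim, signature

def verify_claim(claim: str, signature: str) -> bool:
    """The site checks the signature and the claim, and nothing else."""
    expected = hmac.new(ISSUER_KEY, claim.encode(), hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, signature) and claim == "over_18=true"

claim, sig = issue_claim(True)
print(verify_claim(claim, sig))  # accepted without any personal data
```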

In another example, a risqué game in the 1980s might ask questions about baseball players that only an older audience would know. Of course, the questions didn’t prevent a baseball-prodigy youngster from gaining access.

How Privacy by Design is achieved depends on the application, technologies and choice of approach. Daugherty can anonymize your data to protect you from penalties through GDPR. We can also analyze it to determine opportunities in the data where your organization can expand. Contact us today.

Making Your Business Intelligence Team Agile

So, you think you can’t make Agile work within your business intelligence team?  Think again.

Agile business intelligence isn’t new; many people have written about it.  It’s mature enough now to have a smattering of books lining the shelves. What are these books based on? Experience. People are executing it and have been for some time.

At Daugherty Business Solutions, we have a group of those experienced professionals on our team. In fact, we have worked with a large telecommunications client for fourteen years using parallel Agile sprints to support enterprise financial performance reporting. The group in place has evolved from large teams to manageable teams of eight, and members have cross-trained between ETL and data visualization tools.

The evolution of these teams was not an accident, but instead an orchestration of the Agile methodology. Some people may hear the term Agile and wonder whether it applies to business intelligence in the same way it does to application development, and there is no doubt that there is a difference between the two. But those differences don’t need to stand in the way of organizations realizing the benefits of Agile principles: bringing more value to their customers, more quickly, and with less overhead.

So, how do you do it?  Before making that determination, there are two key things that need to be identified.

First, data projects are different. You have to acknowledge it.  Sing it from the rooftops! Just like we cannot use exactly the same team and methods to successfully execute a Waterfall application development project and a Waterfall data project, we certainly can’t use the same team and methods to execute an Agile application development project and an Agile data project.

For example, Team Daugherty is working with a cable company’s business intelligence group to build Agile capabilities into their current teams. As we launch projects with them, we’re working through our planning sprints and introducing an additional sprint to allow for some of the coordination and planning required when implementing projects. This is important when using full enterprise data stacks and fully mature data warehouses. By planning more up front, we’re able to fully implement Agile practices, and we are seeing an extremely positive response from our business partners.

Second, because information management projects are different and have different players, characteristics and challenges, we’re going to have to let go of some of our purist notions of Agile.  Think for a minute about this question: what is Agile at its core?  Is it ceremonies?  Is it a product backlog? Is it cadence? Agile is not one of these things. These are all things we do within specific Agile methodologies. To be Agile is to generate value for the business in a consistent, timely manner. The rest are just rules.

Does this mean that we want to throw all of these guidelines out the window? No! What it does mean is if a particular rule is preventing us from achieving our goal of realizing the benefits of Agile, we should evaluate it for validity within the information management discipline and weigh that against how that particular rule helps teams realize specific Agile benefits.

This is true for a number of the projects we are working on right now. One that is top of mind has Daugherty working with the business intelligence group at a retail client to define how Agile principles will be applied on their teams. As we know, one size does not fit all. This information management project, unlike typical application development projects, has some stories that must be executed in a specific order. Working together with the client’s teams and product owners, we’re redefining how the stakeholders prioritize features and the value produced in each sprint, knowing that some of our work items must be executed in a specific order. A purist Agile rule says that work items should be interchangeable and re-orderable at any time, but we know that, in this situation, that is not what will lead us to be the most successful.

Just remember, we’re going to have to let go of “purist” rules and methods, weighing them carefully against the value they bring. Through our work we know that it has to be okay to sequence some work items in a specific order to deliver value to the business.

When working as a part of an Agile team at Daugherty Business Solutions, I have learned that we value “value” more than rules.

Driving Customer Loyalty Strategy Through Data Analytics

With the ease of online shopping, the pop-up ads sharing news of sales and low prices, and the help (or harm) of online reviews, many companies are feeling pressure from their competitors. The challenge of gaining return customers is at the forefront of the battle for survival, and many organizations are turning to loyalty programs as a way of incentivizing their products and services. There are a number of different loyalty programs out there; the most commonly used ones measure the amount of customer engagement (number of visits, money spent, products purchased, services used, etc.) and allow customers to accumulate points for being loyal. As an Information Architect, I spend my days helping clients understand the analytics and insights resulting from these programs. Let’s take a closer look at some examples and their effects.

Customer Loyalty Reward Types

Not all customers are profitable; similarly, not all rewards lead to loyalty. When measuring the success of loyalty programs, it is important to take a hard look at various rewards and measure which ones lead to the best level of engagement.

Some loyalty programs have simple rewards. For example, Kroger, the grocery chain, provides instant rewards to customers in the form of a discount for being a Kroger Plus customer. There is no process of accumulating points and redeeming them.

On the more complex end of the loyalty spectrum are the organizations that utilize programs in which customers accumulate points that they are then able to redeem for different types of rewards. You may be familiar with some of these programs. One example is the program in place for American Express customers. These users accumulate points simply by using their American Express credit card. The points they earn can be used for travel expenses (flights, hotels, etc.), gift cards or merchandise, or the customer can choose to receive cash back. Similarly, airlines such as Delta and Southwest give you a host of redemption options to choose from just for choosing their airline. Another example of this type of loyalty rewards program is Dunkin’ Donuts’ offer of free beverages for the customers that buy their products most often.

Evaluating Rewards Using Data Analytics Techniques

Assuming your company offers a variety of redemption options, the most accurate way to measure the effectiveness of each option is by conducting a pre-post analysis using matched pair design, a special case of the randomized block design statistical experimentation technique. More simply, individuals redeeming their rewards are paired with similar customers who have not.

Well, what does “similar” mean? That depends on your specific definition. Generally speaking, you will match customers using attributes such as age, gender, loyalty class (basic, premium, etc.), tenure, market/location or previous revenue spend. When creating these matched pairs, you will ensure that while the customers are alike on all other attributes, one of them has redeemed the reward you are evaluating while the other has not. This essentially gives you a control group of customers.

The next step is to measure each customer’s behavior before and after the redemption activity. The measurement can be revenue, number of visits, number of trips/miles, etc. If you have succeeded in creating these pairs, you should find that the pre-period behavior of the control and target groups is similar. You will then be able to observe the behavior post-redemption and measure the lift the reward/redemption generates.
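The mechanics can be sketched in a few lines of Python. The revenue figures are made up; each tuple is one matched pair, with monthly spend measured before and after the redemption date:

```python
from statistics import mean

# Each matched pair holds one redeemer (target) and one similar
# non-redeemer (control). Numbers are purely illustrative.
pairs = [
    # (target_pre, target_post, control_pre, control_post)
    (100, 130, 102, 104),
    (80,  95,  78,  80),
    (120, 150, 121, 125),
]

# Average change in spend for each group across its pairs.
target_lift = mean(post - pre for pre, post, _, _ in pairs)
control_lift = mean(post - pre for _, _, pre, post in pairs)

# The reward's effect is the lift beyond what similar customers did anyway.
net_lift = target_lift - control_lift
print(f"target: {target_lift:.2f}, control: {control_lift:.2f}, net: {net_lift:.2f}")
```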

Potential Pitfalls

Although this form of evaluation does give you measurable results, there are many factors that you need to be aware of. For example, you could introduce sampling bias if you start by choosing the wrong group of customers. Many large corporations have tens of millions of customers in their loyalty program and only the active ones are maintained in a readily available database. This makes sense for different types of analysis as it improves the speed and performance of queries. However, if you start with this set of active customers, it could introduce sampling bias as you will probably miss customers that moved toward other vendors/loyalty programs after using a certain reward. In this instance, my recommendation would be to use a random sample from all customers, both active and inactive.

Other factors to keep in mind are the enrollment dates of customers and the period for the analysis. You want to avoid customers that have only been members for part of the time period. The recommendation here is to choose customers that enrolled before the start of the analysis period and to ensure that specific pairs of customers have similar tenure, as tenure can influence the level of engagement in certain industries.

In most cases, customers do not engage with only one type of reward. To ensure that the analysis is specific to the reward being evaluated, it’s important to choose customers that participated exclusively in the given reward.

Linking ROI to Various Customer Loyalty Rewards

The result of your analysis will give you the metrics you are looking for relating to the effectiveness of each reward and the amount of lift it is generating. You can conduct a bit more analysis to compute the incremental revenue as well. For example, if you know the dollars spent per month by a group of customers, you can calculate the average revenue in the post-reward period for the control group customers and compare that value to the target group customers to calculate the percentage increase due to the redemption. If the number of customers is known, you can then compute the total incremental revenue.
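As a worked example with made-up figures, the arithmetic looks like this:

```python
# Illustrative figures only: post-period average monthly spend per customer.
control_avg_monthly = 100.0   # similar customers who did not redeem
target_avg_monthly = 112.0    # customers who redeemed the reward

# Percentage increase attributable to the redemption.
pct_increase = (target_avg_monthly - control_avg_monthly) / control_avg_monthly

# Scale the per-customer difference by the number of redeemers.
redeemers = 50_000
incremental_revenue = (target_avg_monthly - control_avg_monthly) * redeemers

print(f"lift: {pct_increase:.1%}, incremental revenue: ${incremental_revenue:,.0f}/month")
```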

There are many online resources that explain how to compute lift or incremental revenue, but it’s important to understand your unique situation and business problem, discuss the approach with the stakeholders to whom the results will be presented, and take the approach that best fits your situation.

Conducting an analysis will reveal the types of rewards that will likely be most profitable and what makes sense for your company and for your customers. Once you have made that determination, you may want to begin to do more broad market research into the different reward types to see if you can identify any trends. You may also choose to create a customer profile to determine the most profitable types of rewards and to understand the types of customers that take advantage of these rewards. Your findings will help to inform your customer acquisition strategy.

You Know Who Will Respond…Now What?

As with any analytics project, understanding what the numbers are telling you is not always easy. It is entirely feasible to come up with varying results based on underlying assumptions. Once you have put all of the work into understanding who is participating in your organization’s loyalty program and how that is affecting their decisions as a customer, be sure to find a partner to help you fully realize the potential of the program you have in place and to keep your customers coming back for more.

The Urgent and Important Work of DevOps

“What is important is seldom urgent and what is urgent is seldom important.” – Dwight Eisenhower

There are things in our lives that are urgent and important, urgent and unimportant, not urgent and important, and not urgent and not important. The idea is, you first want to get your urgent and important things out of the way, then put a lot of energy into important but not-so-urgent priorities. DevOps adoption, as it’s currently being practiced, tends to focus on not-urgent or not-important problems – and sometimes both.

Some Background:

When the term “DevOps” was first introduced in 2009, the largest barrier to adoption was awareness. Managers and leaders didn’t know what DevOps was, much less why they needed it. As an industry, we tackled this problem with two main strategies: we appealed to the unicorns (e.g., “Amazon deploys to production every 11 seconds”), and we built grassroots proofs of concept. At the time, this was both urgent and important. Urgent because our competition was reading the same articles about the industry leaders, and important because the first step toward solving a problem is to recognize it and name it.

Seven years later, the unicorns have all been spotted, rounded up, studied, analyzed, reported on, photographed and imitated. So why do we still feel that it’s important to drive awareness? Certainly there is still a good deal of uncertainty about what DevOps really means, and this understanding needs to continue to be refined. But the awareness is there. IT VPs are beginning to put DevOps initiatives in their budgets. The elusive “DevOps Engineer” is a hot commodity for enterprise recruiters. Go to any technical conference, and there’s sure to be a session on DevOps for developers. Undoubtedly, many organizations have at least one team that can showcase automated builds, tests and deployments. It has now become urgent and important to begin realizing the gains that have been promised by all the grassroots champions.

It’s Not About the Tools:

Any student of the DevOps movement will tell you that it’s not about the tools – it’s about building the culture. And yet, everyone is organized around building and leveraging those tools. We are seeing teams that are able to check in code and automate processes all the way to deployment into QA, but then things stop. The change advisory board is still there. IT security is still there. Ops is still on edge about frequent deployments. But let’s not forget the whole point of all of this: to deliver business value quickly. It is now unimportant if you go from 1 build a day to 10 builds a day if you’re still sitting at 1 deployment per quarter. It’s not urgent to lower your WIP from Dev to QA if you’re letting it all stack up on your staging servers. Clearly the important work is to address the culture, relationships, security, and change control processes that have previously been stuck in a not-urgent/not-important quadrant.

What You Can Do:

This shift towards focusing on non-technical matters is hard for developers. When we’re staring at our screens thinking, “what can I do to make this better?” we usually come up with technical answers like writing a script to automate a process, or creating a tool to assist with build metric collection. These are great ideas, but a better answer may be to go find an ops guy and take him out to lunch or to sit in on a release management meeting to better understand why getting through production control is so difficult.

Rule of Thumb:

If you’ve already developed some automation, do not favor more automation over relationship development with operations. I witnessed one mid-sized company focus entirely on agile development practices and build automation without ever addressing the production control variable. Sprint velocity and story acceptance were high. Builds were automatically deployed to development servers on code check-in, and were push-button deployable to QA servers when QA was ready. All final binaries were automatically packaged up for deployment without any developer involvement. Yet, during a 12-month period not a single line of code was deployed to production. Clearly automation didn’t provide any business value here.

Conversely, I worked with a Fortune 500 company that was struggling with large, complex software releases. Big-bang releases would frequently be rolled back, causing the deployment complexity to continually ratchet up. Some rollbacks were the result of configuration or usage issues, rather than bugs. Failed deployments became a self-fulfilling prophecy. While developers were improving the build pipeline, no new build or release automation was put in place to mitigate the deployment issues. Instead, the problem was solved by establishing a better relationship with our operations partners. We figured out that if we had the right players involved, we could collaboratively deploy smaller incremental releases to a limited production audience. This allowed developers to work side-by-side with end users to quickly certify changes and develop fail-forward plans in the event of mistakes. Release frequency went from a monthly ordeal with regular rollbacks to multiple weekly deployments with nearly zero rollbacks.

It is time for the development managers and directors to start playing a more active role. Managers need to recognize that to fully realize the benefits of DevOps, they can’t simply delegate all the work down to the technical staff. Someone needs to champion the organizational changes that need to occur to get development, operations, QA, change management and security all working to take the same hill. Someone needs to start the grassroots work of building the organizational tools (not technical tools) to get the products through security and release management teams. These are the highly important, and increasingly urgent tasks that are needed to solve the DevOps puzzle.

When Customers Attack! Engaged To Engraged In The Blink Of An Eye

Looking to make a few updates to your loyalty program? There are a few things you should know… If you’re not careful, there could be a volcano of negative sentiment waiting to erupt in the form of your customers.

Just ask Starbucks, which announced this past month that it was changing its loyalty program structure from a frequency-based model to one based on what customers actually spend.

For those of us who eat, sleep, drink, and live loyalty, announcements like this are never much of a surprise. In fact, Starbucks’ recent announcement harkens back to the changes Southwest made to their frequent flyer program circa 2011. What seems to catch some companies off guard, though, is the customer backlash that ensues.

You’ve seen it, maybe even had a similar experience. It’s when your fiercely loyal customers turn on you and go from engaged to what I’m calling enraged by spouting mad declarations through social media, showing how their ex-favorite company has ripped their loyal heart out and eaten it… Raw.

So what is engraged? My definition: When a loyalty program member cares so deeply about the program and its brand that any unilateral change to the program’s core structure is viewed with outrage and a feeling of downright betrayal.

Seems fitting, right?

Hyperbole aside, for most companies, the decision to alter the foundational structure of a rewards strategy is a long time in the making. However, you can minimize the damage done by your “engraged” customers. The key lies in your mindset, advance planning, and strategic over-communication.

Below are a few pieces of advice you should consider if you’re planning changes to an existing loyalty program or strategy you have in market today.

Don’t Panic – Your Strategy is Sound

If you’ve done your due diligence in coming to the difficult decision to alter the fabric of a loyalty strategy, you’ve probably worked through a number of scenarios and put in the “hard yards” to get to that decision. You can take comfort in that fact and remember that the ensuing backlash will lessen with each passing day. After all, Southwest certainly lived to fly another day after their program changes. I suspect Starbucks will too.

The Right Customers Will Adapt

A change in a program structure like the aforementioned can also serve as an opportunity for you to indirectly ask for a “hand raise” from your customers. Customers will choose with their actions whether they’re still in, or whether they choose to no longer do business with you. For a lot of companies, that’s scary. Some companies are reluctant to accept that all customers are NOT created equal. Those companies should remember, though, that the best loyalty programs – especially rewards-based ones – should be designed with a best customer in mind, not every customer. And sometimes, the definition of “best” can change and doesn’t always equal “most frequent.”

Fail to Plan, Plan to Fail

As a consultant, I often tell clients that the best time to craft your program’s exit strategy is before you even launch it. The same principle can be applied to situations where major changes are being made to the rules of a program structure. Realistically, the announcement of changes should be made at least a full 6 months AFTER the decision to make the change is made.

Why? That’s how long it takes to develop the right communications plan, analyze the legal (if any) and financial risks, and prepare and train your frontline employees to handle questions and inquiries. If the plan is simply to issue a press release and make the change, then you should revisit it. It’s critical to be methodical and to take your time, or risk one of the worst-case scenarios you’ve thought about coming to pass.

Put the Customer Center Stage in Your Planning

All planning you do around communications, timing, and risk should be done with your mind squarely focused on the customer backlash. Looking at worst-case scenarios and planning for them will help to minimize the impact such an announcement can have on your company’s bottom line. In essence, you need to be ready for whatever comes your way. Part of this planning is determining the size of the bone you can throw disgruntled customers when the inevitable complaints arise. That size should be determined by customer value, and your frontline employees should be empowered to address complaint situations.

Communicate Early and Often

If you drop an atomic bomb on your customers’ loyalty, you can expect them to go nuclear. Ideally, customers should be notified that changes are coming at least 6 months before the effective date with subsequent communication streams aimed at promoting any new program benefits and helping customers realize the impact the changes will have on them. Your messaging should be as devoid of “spin” as possible, but still should accentuate the more positive reasons for the change (tied back to the customer) and should definitely highlight any new features or benefits that are also getting introduced.

For instance, Southwest highlighted how the new program structure would save the customer from having to endure expiring credits. The new points program had no expiration so long as a customer remained active. They also highlighted ways to earn even more based on new fare types and promotions.

Provide the Engraged with a Platform to Complain

When angered customers are left to their own devices, that’s when perceptions of a brand can tank. In one fell swoop, customers can show that not only did your company devalue the program they once loved, but now you’re being unresponsive to their complaints. The way to circumvent this perception is to provide them some recourse or platform to lodge their complaints. Take caution, however – the expectation will be that a response will be provided. If you let them complain in a one-sided dialogue, then you risk the perception of your brand getting even worse.

Generally, customers are hardy, adaptive, and most of all, not stupid. Loyalty programs have been around long enough for them to know that sometimes changes need to be made. If you take the time to remain positive about your decision, plan carefully, and most of all communicate strategically, you can minimize the impact sweeping changes may have and at the same time reaffirm your commitment and loyalty to your best customers.

Staying Relevant in IT: How We Foster a Collaborative Learning Environment at Daugherty

While there are many things that I love about my job at Daugherty St. Louis, there’s nothing quite like helping a developer grow in their craft. If we’re honest with ourselves, one of the toughest parts of our job is staying relevant. Technology is moving so fast, and it takes a community of caring individuals to help inspire, motivate and teach to stay on top of our game.

That’s one of the things that I’m so proud of here in the Software Architecture and Engineering Line of Service. We all motivate each other, teaching and learning from everyone, to keep driving technology forward and solving some of our clients’ toughest business problems.

With that, one of our goals as an organization is Alignment. We try to understand where the technology is going and what our clients are implementing, and we ensure that every one of our consultants is trained to stay relevant. One of the more entertaining ways we do this is through our Annual Summit.

It’s a full-day event where our teammates can come together for learning, great discussion, fun, and camaraderie. Nearly 70% of our team packed our two largest conference rooms on a Saturday for 10 distinct classes. Who says learning can’t be fun? With our passionate group of individuals, the Annual Summit is one of my favorite days of the year.

Classes this year included Responsive Web Design, Intro to Microservices, Cloud Development, and Internationalization, as well as thought leadership topics like DevOps, Bi-Modal IT, Agile in the Real World, and Solution Architecture. Daugherty employees were able to choose between classes that were newly developed for the Summit and classes they had not been able to attend during our normal round of Lunch-n-Learns and evening classes.

It wasn’t all lecture and discussion… Halfway through the day, the teams broke into smaller groups to participate in a team-building event. They built a deck of “cards” that was quite impressive. In fact, we were all amazed at the ingenuity of our employees.

It wasn’t all work, either. After a full day, we hit the lanes for some bowling. Because we work with clients all across the St. Louis region, our team gets fairly spread out. Events like our Annual Summit, regular hackathons, and Daugherty social events remind us just how important it is to continue fostering an environment of collaboration, respect and learning opportunities for our people.

I would encourage anyone who is a developer or a manager to make sure they never lose sight of the importance of teamwork and team building. What can you glean from those around you? Take a break, check out some training, have a discussion with your teammates, and never stop learning. (Which goes for managers, too.) We’re never too old to learn something new.

So, what’s your environment like? How can you foster more collaboration and learning among your peers? Share your thoughts with me in the comments, and of course, I’d love for you to consider joining Team Daugherty if your environment isn’t quite what you’d like.

Photo Credit: Dave Oleksa via Twitter

Organizational Project Management (OPM) Trends Upward

“Without strategy, execution is aimless. Without execution, strategy is useless” – Morris Chang, CEO of Taiwan Semiconductor Manufacturing Co.

When you think about a project, it makes sense that all projects should be enablers of organizational goals. If not, why would a company work on that project? More and more, companies are making an effort to ensure this occurs, but it’s not exactly as easy as it sounds, especially within large corporations.

Among our own clients, companies are looking to reduce the chasm between strategic planning and results, and the solution for this is good execution. This only happens when you can translate organizational strategy into a portfolio’s components (i.e., programs, projects, operations, initiatives, etc.) and align them to overall strategy. Below are a few great resources to help you with this, whether you’re a large organization or just getting started in your career as a project manager.

Research Available:

In the last five years, PMI (Project Management Institute) conducted surveys of government, for-profit and non-profit organizations to identify how they are aligning their portfolios with organizational strategy and what they have done to improve execution. The results of the surveys have been published in different issues of Pulse of the Profession.

Guides:

There’s also a new guide that PMI developed: Implementing Organizational Project Management: A Practice Guide. This guide helps executives and project management practitioners envision the alignment and execution that can create the synergy required to produce the products and benefits that achieve strategic goals. It’s everything you need to meet or exceed stakeholders’ expectations.

What are the real applications of this?

One great example in the public sector of how project portfolios were aligned with strategy to enable success is the City of Frisco, Texas, which has been named “The Best Place to Raise an Athlete.” To keep achieving that vision, it has run several projects partnering with the U.S.’s major sports leagues and teams to create a thriving sports-related market. The City of Frisco secured the right capabilities – not available in the public sector – and implemented them in the right place at the right time through partnerships with external sources that share Frisco’s vision. Today, the stadiums and arenas used by professional teams are surrounded by sports academies that help young kids develop into tomorrow’s athletes.

In the private sector, organizations like 7-Eleven do not limit their employees and executives to staying in the same position forever. As an example, their CIO held several leadership positions from logistics and merchandising to operations. His strategic and business acumen helped him to select and prioritize the alignment of IT projects with business needs to gain full support from stakeholders and to execute and prioritize business initiatives through innovative technology solutions.

Key Takeaway:

Organizations that take advantage of those combined skills in business and strategic management, and of versatile resources, are driving their success. As business evolves, there is a great opportunity to elevate your value as a strategic partner and be more successful as an organization.

PMI identifies these versatile resources as the PMI Talent Triangle. It’s the ideal skill set combining technical, leadership, and strategic and business management expertise. This is what we’re seeing within our own clients at Daugherty and how we’re helping many of them be more successful.

Does your organization have an approach to align and execute portfolio components to consistently and predictably deliver corporate strategy and produce better results? If you’re a project manager, do you have the additional skills to stay competitive in an increasingly complex market? With the right capabilities and planning, this can make a significant difference in your success.

Yes – You Can Embrace Bi-Modal IT and Be Better for It

As the Digital Business Revolution rages on, we’re continuing to see businesses struggle to keep up. This blurring of the digital and physical worlds doesn’t show any signs of stopping, either. With the continued creation of new business designs, how does one balance the tried-and-true method with the exciting opportunities of innovation?

The problem that we see quite often is a misalignment between the dollars spent and the value returned. Gartner’s Pace-Layered Application Strategy shows us that IT is spending the preponderance of its budget on Systems of Record, even though it is the Systems of Innovation and Differentiation that provide the most value and alignment to the business strategy.

This is where Bi-Modal IT comes into play, a term Gartner uses to describe a dual set of capabilities, activities and deliverables. It’s one of the tactics used to try to right-size this conflict by dividing IT into two distinct paradigms.

[Bi-Modal IT graphic]

Take a look at the graphic above and think about this: new business models are arising daily, and it takes a different thought process and structure to keep up with the speed of change (Mode 2, on the right). At Daugherty, for example, there is still a need to operate in Mode 1 (on the left), providing the rock-solid, reliable delivery necessary to ensure our customers’ customers are getting the value they expect. Our 30-year history is built on providing valuable, predictable, end-to-end project delivery. But that does not conflict with our embrace of Mode 2 for systems of innovation.

We have helped several of our clients move to a more agile, product-based way of thinking, where rapid deployment and feedback can provide a competitive advantage and deeper customer loyalty. This starts with a deeper alignment between the business and IT. When this happens, you can truly measure success and see the desired business outcomes that stem from this type of partnership.

The truth is that innovation is going to happen, and it will happen with or without IT’s knowledge and guidance. As we can all likely agree, IT should be the one providing the guidance and understanding of where and when to use cloud, PaaS, and SaaS, and providing the landscape and the ability to rapidly prototype and stand up proofs of concept. However, the business now has tools at hand to act as citizen innovators that are far more powerful than the Shadow IT of the past.

While change can feel risky, an IT organization that embraces Mode 2 will be better positioned to ensure the feasibility and sustainability of systems when prototypes and pilots turn into production systems. And yes, you can accomplish this without leaving behind the delivery excellence of Mode 1. A Bi-Modal IT structure allows the business to be a partner with IT, not a competitor.

Here are some things to keep in mind for embracing the culture of innovation, or Mode 2:

  • Failure is an option, and failing forward is a valid plan
  • Value-driven decision support and alignment outweighs other factors
  • Consistent teams will have a sense of ownership and accountability
  • Focus on brand, experience, sentiment, and loyalty
  • Business-driven work, in many cases, exposes new tools and techniques
  • Be risk tolerant
  • Be mobile, anywhere at any time
  • Lightweight organizational governance
  • Continuous learning

Agile in the Real World – Is It Right for Your Organization?

At Daugherty, we probably hear about Agile Development more than any other topic. We get common, detailed questions such as: How do I get started? Should I use Scrum? Can my PM be the Scrum Master? Then there are also the bigger-picture questions about the nature of migrating from a waterfall or an iterative approach to full agile.

Daugherty believes that there is no one answer that fits perfectly into every environment. The truth that everyone must understand is:

  • Agile is not a magic bullet that is going to solve all of your IT dysfunctions
  • Agile is not a set of ceremonies and artifacts, but a cultural discipline that demands a change in thinking as well as process

Let me start off this conversation with the question we should be getting asked: Why Agile?

Agile construction provides key benefits that will align IT more closely to the business outcomes it provides, reduce rework, and deliver tangible value to the business more quickly. No matter how you design your Agile program, it must have:

  • Rapid feedback from all parties that leads to less rework, highest-value construction, and better definition of the deliverables
  • Embedded QA and continuous testing (we are going to be ruthlessly refactoring; we need to make sure we aren’t breaking the things that used to work)
  • A tight, well-defined relationship between the business and IT, with dedicated product owner(s) willing to guide the teams through all decision and feedback stages
  • Empowered teams that are allowed to self-select, self-drive, and self-commit to the work being done (along with empowerment comes responsibility and accountability)
  • Faster release cycles, getting the high-value, completed items into users’ hands as quickly as is feasible
  • Prioritized risk evaluation, making sure that architectural or technical risk is addressed before functionality is fully developed

An Agile program can leverage Scrum, XP, Adaptive Software Development, or any of the other Agile frameworks, as long as first and foremost you keep the above disciplines in place.

So to address the bigger question… Should you go Agile?

Let’s look at what it takes to truly make agile work. First and foremost, you need committed business and IT alignment, where value, funding, and outcomes are all in sync. If you do not have this, your first step should be to invest in a Business Alignment workshop; otherwise, you may have Agile construction, but you will never have an Agile Enterprise.

Second, you have to understand your current staff. You wouldn’t ask a Java shop to start coding in .NET without training, continuous education, and a coaching plan. Nor should you expect your business and IT resources to pick up the rigor and discipline of Agile without the same level of training and support. In addition, the way you allocate resources, by product instead of by project, may mean an organizational change is warranted, which again can be culturally difficult to accomplish.

And lastly, follow the money. Is your organization willing to make decisions without full knowledge of cost and delivery date (many would argue that we never had this and that we were just kidding ourselves in Waterfall)? Daugherty believes that some upfront work is necessary: a level of scoping and overarching requirements should be rapidly gathered, allowing the final, feedback-driven scope to emerge. Emergent architecture and emergent standards should be avoided where possible. Agile works best when Architecture and Security are provided as upfront guidance, followed by just-enough governance, to business and construction teams.

We, ourselves, run an agile shop in our Dev Centers and have several Scrum Masters and certified SAFe experts across our team. We’re helping clients through agile workshops and coaching, Rapid Process Mapping techniques (the quickest, most accurate method of capturing 80% of requirements), bringing Agile programs to scale, and even Bi-Modal IT and Application Pace Layering. Let us know when you’re ready to talk agile. Team Daugherty is ready to help you.