Monday, September 08, 2014
I also use SignalR for real-time communications within my app. While I could use Ajax long polling, WebSockets are a bit faster.
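For a sense of what that looks like, here is a minimal sketch of a SignalR hub (SignalR 2.x style); the hub and method names are illustrative, not taken from the actual app:

```csharp
using Microsoft.AspNet.SignalR;

// A minimal SignalR hub sketch. SignalR negotiates the best available
// transport per client, preferring WebSockets and falling back to
// techniques like long polling only when it must.
public class NotificationsHub : Hub
{
    // Pushes a message to every connected client in real time.
    // "receiveMessage" is whatever handler the JavaScript client registers.
    public void Broadcast(string message)
    {
        Clients.All.receiveMessage(message);
    }
}
```

The nice part is that the transport fallback is automatic, so the server code stays the same whether the client ends up on WebSockets or long polling.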
Wednesday, June 25, 2014
|My Certificate of Completion|
So why do training organizations give out attaboys? I don't think they'll help you get a raise or even a job. I think it's because you (or your boss) shelled out mucho dinero for the courses and they want something tangible to show for it.
All this got me thinking about how pats on the back can actually breed (and encourage) mediocrity in an organization.
Wednesday, May 14, 2014
|Scary dialog box that popped up.|
|This was most definitely not a false alarm!|
I was a bit late to the game with regards to having a high memory ceiling. Until three years ago, all of my workstations had only 8GB of RAM. I've since upgraded all of them to 16GB and it's been really great. I've been loving it. I'm able to do so much more at once, like keeping hundreds of tabs open in Chrome, and everything just zips along nicely.
As you can imagine, not having had to deal with hardware resource constraints has been super awesome.
This event got me thinking about the way we build software. When I started writing code, it was on a Toshiba Satellite Pro 420 CDS: 100 MHz and 16 MB of RAM. Every byte and cycle counted. To put that into perspective against today's hardware: you could not play an MP3 file on that laptop while doing ANYTHING else. Today it seems most everyone is playing YouTube videos while they are "working".
In a perfect world, everyone wants to write code that is the model of efficiency and optimization, but there is a cost to it.
The Costs of Being Awesome
What are some of the reasons for this? Well, I think it has to do with the management of the organizations they have worked in throughout their careers.
I believe this to be a consequence of what I like to call the hurry up and ship approach to managing application development. This approach is spring-loaded with praise and reward for speeding up the development process in infamous ways such as leaving documentation for "later", lowballing estimates, making the team work overtime to meet deadlines committed to by project managers, skipping unit tests, trimming test cases out of the test plan, not performing regression tests, not profiling your product for performance issues, and countless other anti-practices. This topic alone warrants a post of its own.
While the intentions aren't nefarious, they are detrimental both to the morale of the team and to the health and success of your product.
The point is, while you think you're saving time (and therefore money) by getting your product shipped faster, you're actually mortgaging your future. You'll be paying down the technical debt incurred by poor implementations of hastily designed solutions, and you'll see your defects and enhancements start to cost more in terms of effort.
When it comes to hurry up and ship, the lowest priority is usually performance testing. Most applications built don't run in a high volume environment and their developers suffer from the "oh we don't need to test for load, we're a small app" mentality. In organizations like this, performance is an afterthought, as in, it's thought about when a performance issue is reported. Then it's forgotten again...until the next customer complains, or a server goes down, or your customers move on to a competitor's product.
I'm not specifically talking about load testing. That has its own benefits, like identifying concurrency issues or knowing when you'll need to invest in additional hardware as your user base grows. I'm talking about identifying memory leaks from resources you forgot to release, and just overall making your apps run faster by identifying where the bottlenecks in your code are.
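To make that concrete, here is a small C# sketch of the kind of leak a memory profiler tends to surface; the class and method names are made up for illustration:

```csharp
using System.IO;

public static class ReportWriter
{
    // Leaky version: the FileStream is never disposed, so the OS file
    // handle and its buffers stay alive until the finalizer eventually
    // runs. A profiler will show these instances piling up over time.
    public static void WriteLeaky(string path, byte[] data)
    {
        var stream = new FileStream(path, FileMode.Create);
        stream.Write(data, 0, data.Length);
        // stream.Dispose() is never called here.
    }

    // Fixed version: 'using' guarantees Dispose() runs, releasing the
    // handle deterministically, even if Write throws an exception.
    public static void Write(string path, byte[] data)
    {
        using (var stream = new FileStream(path, FileMode.Create))
        {
            stream.Write(data, 0, data.Length);
        }
    }
}
```

Under light load both versions appear to work, which is exactly why this class of bug only shows up when you profile or when a server runs out of handles in production.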
"Programmers waste enormous amounts of time thinking about, or worrying about, the speed of noncritical parts of their programs, and these attempts at efficiency actually have a strong negative impact when debugging and maintenance are considered. We should forget about small efficiencies, say about 97% of the time: premature optimization is the root of all evil. Yet we should not pass up our opportunities in that critical 3%."
-- Donald Knuth, "Structured Programming with Goto Statements"
I'm not advocating the optimization of everything; just find that 3% and fix it, before you release and before it becomes an issue. The longer the issue lives in your code base, the greater the chance more code will come to depend on it in one way or another. Ultimately, this will increase the costs associated with remediation.
At the end of the day, this stuff is expensive. The tooling, the time to identify the issues, and the acquisition and retention of people with the skills to find and correct these issues all cost money. Yet all of that money pales in comparison to the costs associated with cutting corners.
What are some of the experiences you've had in places you've worked around (not) profiling and hurry up and ship?
Friday, May 02, 2014
|A blast from the past.|
Microsoft's official stance on Visual Basic 6.0 is:
The Visual Basic team is committed to “It Just Works” compatibility for Visual Basic 6.0 applications on Windows Vista, Windows Server 2008 including R2, Windows 7, and Windows 8.
The Visual Basic team’s goal is that Visual Basic 6.0 applications that run on Windows XP will also run on Windows Vista, Windows Server 2008, Windows 7, and Windows 8. As detailed in this document, the core Visual Basic 6.0 runtime will be supported for the full lifetime of Windows Vista, Windows Server 2008, Windows 7, and Windows 8, which is five years of mainstream support followed by five years of extended support (http://support.microsoft.com/gp/lifepolicy).
Yeah...I was shocked by that statement too.
Why would Microsoft want to prolong the life of a set of development tools that was released in 1998? Are they crazy? Nope.
There are many large applications out there written in VB6 that are simply too big to take a big-bang approach to porting to C# (the only real choice for a Microsoft shop). I've personally dealt with several projects in this size category. The only logical way to solve this problem while still maintaining active development is to change the tires on a moving car, meaning you add new features and fix bugs during the migration process.
Yeah, so it isn't the best approach, but new features and bug fixes are what keep the lights on in a software company.
During my experiences modernizing legacy applications, I've run into various features that would be a cinch to implement in C# but needed to be implemented today, not after the migration was complete.
Visual Basic 6 was great back in 1999, but today's applications require functionality such as localization, internationalization, parallel task processing, and stunning user experiences. Guess what? VB6 doesn't do those things very well, if at all.
No, you're not screwed, not in the least bit. What is the solution? It's rather simple, actually. Microsoft has spent a great deal of time implementing backwards (or, depending on your perspective, forwards) compatibility via COM, making sure that newer applications can still talk to legacy applications.
What we don't see too often is older applications talking to new applications.
Wait a minute. Why would anyone want to do this? I thought the goal was to ditch the legacy app? Yes, it is, but we still have a ton of functionality we'd like to maintain, and remember, we can't halt development. The customer doesn't care that we need to migrate to C#. They care that the application gets the features and functionality they require to run their business.
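Here is a hedged sketch of what exposing new C# functionality to a VB6 caller via COM interop can look like. The interface, class, ProgId, and GUIDs below are placeholders I made up; you'd generate your own GUIDs and register the built assembly with regasm:

```csharp
using System.Runtime.InteropServices;

// New functionality lives in C#, exposed to the legacy VB6 app via COM.
// All identifiers here are illustrative; generate real GUIDs for your project.
[ComVisible(true)]
[Guid("00000000-0000-0000-0000-000000000001")]
public interface IInvoiceService
{
    string FormatInvoiceNumber(int id);
}

[ComVisible(true)]
[Guid("00000000-0000-0000-0000-000000000002")]
[ProgId("MyApp.InvoiceService")]
[ClassInterface(ClassInterfaceType.None)]
public class InvoiceService : IInvoiceService
{
    // Once the assembly is registered (regasm /codebase), VB6 can call this
    // with: Set svc = CreateObject("MyApp.InvoiceService")
    public string FormatInvoiceNumber(int id)
    {
        return "INV-" + id.ToString("D6");
    }
}
```

The VB6 side stays dumb: it late-binds through CreateObject and never knows the implementation moved to .NET, which is exactly what lets you migrate feature by feature.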
Business Objectives vs. Developer Objectives
In almost any organization, the objectives of the business and the objectives of the development team are at odds with each other.
Developers almost always want to use new technologies, but they usually just can't find a supporting business case to implement them.
I've heard a pitch to convert an entire ASP.Net Web Forms application to ASP.Net MVC 5. While the argument can be made that we want to lower development costs over time by organizing our application better and removing significant amounts of technical debt, the business and its customer base won't recognize any immediate value from the change.
A more drastic change would be rewriting your VB6 desktop application to be a modern web application. Sorry buddy, it just ain't gonna happen that way. Most applications are simply too large to migrate in a week.
From a technical perspective, all of these changes make sense. Perhaps even marketing would be on board, as they can now say your flagship product is written in .Net and is modern! Maybe sales would buy into the change, as they can sell those paid enhancements for 20% less due to the faster time to market ushered in by lower development complexity and overhead. The key resistance is going to come from executive management: the cost associated with these types of changes would be crazy if there weren't a way to deliver a real value proposition to the customer.
Management's objective is to increase sales of revenue-generating products and keep delivery costs low. A migration doesn't generate revenue until you ship. Even after you ship, you'll have to deal with the headaches of maintaining two separate products that essentially do the same thing. You'll have to end-of-life your legacy product and keep it alive for, uh...well, maybe a decade.
The objective of a developer is very different. Developers want to stay on the cutting edge. They want to use new technologies that make their lives easier. They want to be able to write real unit tests. They want to use a modern IDE. A developer's mind atrophies when they aren't able to put the new things they learn into practice in their day-to-day activities. When developer brains start to deteriorate, you get crappy code, low morale, and all of your good developers WILL go elsewhere.
Striking a Balance
What do you do to make both sides happy? Well, you let them implement the new features in modern technologies and weave them back into the legacy application, of course.
I'll be discussing how exactly you can achieve this in the next part of this series.
How have you handled migrations in your organization? Let me know in the comments.
Monday, March 05, 2012
Martin Hinshelwood recently made the same mistake I did and blogged about it and how he fixed it.
Luckily, I only ended up with several hundred off-by-one values on my work items. I have about 250 left to go through and manually revert the values.
Bottom Line: Only use Excel to bulk-add work items to your project and you'll save yourself a lot of headache!
Tags: Team Foundation Server,Microsoft Excel,lessons,Martin Hinshelwood,headaches
Friday, March 02, 2012
I’m currently running an L10N project as a SCRUM master for my employer. This is also one of the first agile projects being run at this organization. I’ll save the L10N lessons for another post as this one is, as the title states, about learning SCRUM.
Those of you that know me, know that I haven’t had any formal education when it comes to software development. Everything I know, I taught myself. After working at a few startups, I landed my first corporate job. I am a quick study and I have filled in many gaps along the way: One major gap being Software Development Lifecycles (or SDLC).
In this series, I'm going to take you through my journey from being part of a waterfall-based team to being part of an agile organization.
The first way I was taught professional software development was a waterfall methodology. I was working in an organization that just achieved CMMI level 2. We had a mysterious Software Engineering Process Group that made sure the process was being adhered to via weekly audits, tweaked the process as needed, and probably did more than they let the rest of the group know.
When I first learned waterfall, I said to myself: “Oh my, this makes a ton of sense!”; and it did!
I was young and naïve back then. I really did believe that this was the correct way to make software. I mean, it really was a logical way to get the job done.
Let's take a look at what a typical waterfall process looks like:
- Requirements: Gather the needs of the client
- Design: Figure out how I’m going to meet the requirements
- Implementation: Actually implement the design
- Testing: Test that what I implemented actually matches the Requirements and Design
What is wrong with that? Well, unless you live the waterfall life for a few cycles, it’s pretty difficult to spot.
I came to realize that:
- More often than not, the requirements would change during design, development, or both. This would impact the business analyst, the developer, and the tester.
- Many times, the deliverable didn’t meet the client’s needs because the requirements were incorrectly captured. This was usually noticed immediately after we shipped and resulted in a mad-dash to release a hotfix. This would impact the business analyst, the developer, and the tester.
- The aforementioned issues caused product destabilization due to the major changes we would have to make in a very short time period. This would frustrate the customer and make the lives of the account manager and product owner much harder than it needed to be.
All of this sounds like a really big mess, but really, we persevered and delivered what we needed to deliver.
The next post in the series will be about all of the things people tried to do in an effort to avoid change.
Tags: SCRUM,lessons,education,development,Lifecycle,SDLC,waterfall,team,methodology,CMMI,Requirements,Design,Test,destabilization