Monday, December 20, 2010

Technical Debt

What is Technical Debt -

Technical Debt is a wonderful metaphor developed by Ward Cunningham to help us think about this problem. In this metaphor, doing things the quick and dirty way sets us up with a technical debt, which is similar to a financial debt. Like a financial debt, the technical debt incurs interest payments, which come in the form of the extra effort that we have to do in future development because of the quick and dirty design choice. We can choose to continue paying the interest, or we can pay down the principal by refactoring the quick and dirty design into the better design. Although it costs to pay down the principal, we gain by reduced interest payments in the future.

So my obvious next step was to find ways to measure technical debt. Here is what I found.

Tool to measure Technical Debt -

SQALE + Sonar -

With SQALE, Sonar can now fully embrace the Quality Model world, as SQALE is the leading-edge method to assess Technical Debt while conforming to the ISO 9126 standard. The method was developed by DNV ITGS France and is licensed under the Creative Commons Attribution-NonCommercial-NoDerivs 3.0. You can have a look at the SQALE Method Definition document to get a good understanding of the methodology, but here are the main principles:

* Quality means conformance to requirements, therefore those requirements should first be defined. They should be: atomic, unambiguous, non-redundant, justifiable, acceptable, implementable and verifiable. For example, “each method should have a complexity less than 10”. Those requirements are called rules in Sonar.

* The SQALE methodology assesses the distance to requirements conformity by considering the remediation cost necessary to bring the source code to conformity. For instance, if the branch coverage of a source file is 60% whereas 65% is required for each file, the remediation cost will be the effort to cover the missing branches and reach the required 65% threshold.

* The SQALE Method adds up remediation costs to calculate quality indicators. Indeed, when you have several debts, does it really make sense to average them?

* The SQALE Quality Model is orthogonal, meaning that a quality flaw appears once and only once in the Quality Model.
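To make the remediation-cost principle concrete, here is a rough sketch in Java. The class name, method, and the per-branch effort figure are my own invention for illustration; they are not part of the SQALE method or of Sonar:

```java
// Hypothetical sketch of a SQALE-style remediation cost for a branch
// coverage rule. Names and the per-branch effort figure are illustrative.
public class RemediationCost {

    /**
     * Effort (in minutes) to bring a file from its current branch coverage
     * up to the required threshold, given a fixed cost per uncovered branch.
     */
    public static int coverageRemediation(int totalBranches,
                                          double currentCoverage,
                                          double requiredCoverage,
                                          int minutesPerBranch) {
        if (currentCoverage >= requiredCoverage) {
            return 0; // already conformant: no debt for this rule
        }
        // Number of branches still to cover to reach the threshold
        int missing = (int) Math.round((requiredCoverage - currentCoverage) * totalBranches);
        return missing * minutesPerBranch;
    }

    public static void main(String[] args) {
        // 100 branches, 60% covered, 65% required, 10 minutes per branch
        System.out.println(coverageRemediation(100, 0.60, 0.65, 10)); // prints 50
    }
}
```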

More about SQALE/Sonar - http://www.sonarsource.org/

Wednesday, December 15, 2010

One of the Most Powerful Debugging Practices

DZone promoted a link called "One of the Most Powerful Debugging Practices," showing the use of a trap. The example is in C#, but a Java version is offered. An interesting thought - is it worth it?
Add Trap() calls all over the place to cover all the execution paths, and have each throw a runtime exception so that the debugger picks it up and lets you step through the code path immediately following the trap.
Once debugging is done, prefix the trap with a ** marker so that it causes a compilation failure and that piece of code gets removed.
So essentially, if you use a debugger this is handy for making sure all the execution paths have been exercised; if you don't, there is little added value.
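A minimal Java sketch of the idea (the Trap class here is my own reconstruction of the practice described, not the article's actual code):

```java
// Sketch of the "trap" practice: Trap.trap() marks an execution path you
// have not yet stepped through in the debugger; hitting it throws, so a
// debugger configured to break on uncaught exceptions stops right there.
public class Trap {

    public static void trap() {
        throw new IllegalStateException("Untested execution path reached");
    }

    // Example usage: put a trap on each branch, then remove each call once
    // you have stepped through that path. Prefixing the call with ** (as
    // the article suggests) forces a compile error so leftovers cannot ship.
    public static String classify(int n) {
        if (n < 0) {
            // Trap.trap();  // path verified in the debugger, trap disabled
            return "negative";
        }
        return "non-negative";
    }

    public static void main(String[] args) {
        System.out.println(classify(-1)); // prints "negative"
    }
}
```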

Tuesday, December 14, 2010

How much time out of your day does IBM waste?

Blogger Chris Hardin has written up a post breaking down the time wasted while using WAS and RAD: a brutal loss of at least two hours a day, with predictions of much more (hedged with a lot of "oftens" and the like). It makes you wonder: what would the breakdown be with other products?


"
  • 1 hour sporadic time waiting for WAS to deploy after I make a change. On a heavy day, this can be two hours.
  • 30 min - 1 hour waiting for RAD to respond during a garbage collection cycle. You'll know this is happening because RAD will lock up until it finishes.
  • Some days in a month, I am trying to figure out some classpath issue that is specific to WAS. Many standard J2EE application setups, choke in WAS due to WAS trying to favor it's own IBM classpath. This can range for a couple hours to a couple days.
  • 30 minutes a day looking for some setting in RAD that is hidden in 5 different places
  • Often I spent a couple hours looking for some issue that turns out to be an IBM specific problem. A good example is that WAS wasn't allowing my servlet filters to fire in an app so I had to set a property in WAS console to fix it and another time I had to patch WAS because the idgits at IBM decided to make it, by default, look for Web Service annotations and try to create a Web Service and they didn't put in a way to turn it off. Hours and hours of work here.
  • I spend a lot of time looking for settings in WAS Admin console that are easy to find in JBoss or Tomcat."
Yes, it's slow, but then imagine the runtime plugins and other things that get loaded with or without the developer's knowledge.

One of the comments sounded interesting:

"In my case, I have not had any of the problems the original poster complained about.

We have the WebSphere environment managed with WebSphere ND. We are able to install and propogate on the server farm a new EAR file in less than five minutes. I worked with the operations team to create Jython scripts allowing installs and updates to be as simple as executing a single command on the command line.

I am able to launch RSA 7.5 on my laptop (a dual core centrino) in about two minutes. This includes opening about 20 projects in the primary workspace.

Of course, tools as complex as WebSphere and Rational products are very easy to misuse and thus suffer great performance problems. However, at my company, my team encourages spending time to learn the tools -- taking the necessary courses and following the advice in the Redbooks. As a result, we know how to tune the product, modify the eclipse.ini file, change the capabilities and which components are started by default, etc. It takes time to learn to use complex tools. But, once we learned the tool, we are not spending time cursing it.

Reminds me of the complaints against JPA. When developers misuse JPA, the resulting programs are awful and very slow. Then the developer curses JPA. But, the problem is not JPA, the problem was the developer used the wrong mental model when creating the Entities."

So, bottom line: are we making a U-turn back to the good old days of vi and TextPad? Sure, I can launch TextPad in two seconds, stuff in my code, have Maven do the build, and have it hot-deployed onto the server of my choice.

What is your take ?


Top 10 reasons I don't like JSF

Bruno Borges posted the "Top Ten Reasons I Don't Like JSF," offering these reasons along with explanations:

  1. Extra step when defining a project's architecture
  2. Fragmented Community
  3. Fragmented Documentation
  4. Component Incompatibility
  5. Caveats on some scenarios because of different implementations
  6. Designers and developers roles mixed
  7. Does not improve usual web development process
  8. Non-functional prototype
  9. Performance
  10. Web is fast, standards are slow

There are various ways to sincerely criticize a technology. True constructive criticism aims to make the tech, and the overall development landscape, a better place. You'll find this daily on the various JSF mailing lists, in the form of bug reports, feature requests and honest discussion.

But this 'criticism' uttered by the Wicket fans isn't criticism; it's blind hate and propaganda. How can criticism be valid when the facts are plainly wrong, or too vaguely worded to show any insight on the critic's part?

How often do these hate lists contain nonsense like "there are too many incompatible JSF implementations like Sun/Oracle JSF, RichFaces, MyFaces"?

What's up with that? How can you criticize JSF when you know so little about it that you don't get the difference between a JSF implementation and a JSF component library? How seriously would you take a 'critic' who rants against there being too many Java implementations, like the Sun/Oracle JDK, Commons Collections, Quartz and the IBM JDK? Seriously!?

And what about the nonsense about JSF being only about state? Or only being about POST? This is clearly not true. It's like saying Java can only handle Unicode and no other character set; it's simply not true. Yet Wicket fans keep putting this in their hate lists. They don't seem to care whether their items are true or valid; every item makes the list longer, right?

In particular, they typically like posting outdated information. JSF 1.0 had problems. Real problems. But how long ago was that? How is it still relevant? Who still complains that Java is bad because it's a purely interpreted language? We've had HotSpot for ages now; purely interpreted execution is so far behind us that it's no longer relevant.

All this gives me the strong feeling that this 'criticism' is hardly about real users being genuinely displeased with JSF, but about Wicket zealots being shocked that their perfect framework sees so little use in practice.

For the record, I really like JSF, but I don't think it's perfect. There is still a lot to improve; just see the long list of things being considered for JSF 2.1. But these hate lists... they are just way too over the top to be taken seriously, if you ask me.

Thursday, December 02, 2010

Scalable System Design Patterns

Ricky Ho, in Scalable System Design Patterns, has created a great list of scalability patterns along with very well done explanatory graphics. A summary of the patterns:
  1. Load Balancer - a dispatcher determines which worker instance will handle a request based on different policies.
  2. Scatter and Gather - a dispatcher multicasts requests to all workers in a pool. Each worker computes a local result and sends it back to the dispatcher, who consolidates them into a single response sent back to the client.
  3. Result Cache - a dispatcher first looks up whether the request has been made before and tries to return the previous result, in order to save the actual execution.
  4. Shared Space - all workers monitor information from the shared space and contribute partial knowledge back to the blackboard. The information is continuously enriched until a solution is reached.
  5. Pipe and Filter - all workers are connected by pipes across which data flows.
  6. MapReduce - targets batch jobs where disk I/O is the major bottleneck. It uses a distributed file system so that disk I/O can be done in parallel.
  7. Bulk Synchronous Parallel - a lock-step execution across all workers, coordinated by a master.
  8. Execution Orchestrator - an intelligent scheduler/orchestrator schedules ready-to-run tasks (based on a dependency graph) across a cluster of dumb workers.
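The first pattern is simple enough to sketch in code. Here is a toy round-robin dispatcher in Java, illustrating one possible dispatch policy; it is my own example, not code from Ho's article:

```java
import java.util.Arrays;
import java.util.List;
import java.util.concurrent.atomic.AtomicInteger;

// Toy round-robin load balancer: the dispatcher picks the next worker in
// turn for each request (one of the "different policies" the pattern allows).
public class RoundRobinBalancer {
    private final List<String> workers;
    private final AtomicInteger next = new AtomicInteger();

    public RoundRobinBalancer(List<String> workers) {
        this.workers = workers;
    }

    /** Returns the worker that should handle the next request. */
    public String pick() {
        int i = Math.floorMod(next.getAndIncrement(), workers.size());
        return workers.get(i);
    }

    public static void main(String[] args) {
        RoundRobinBalancer lb =
                new RoundRobinBalancer(Arrays.asList("worker-1", "worker-2"));
        System.out.println(lb.pick()); // worker-1
        System.out.println(lb.pick()); // worker-2
        System.out.println(lb.pick()); // worker-1 again
    }
}
```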

5 Java tips of the day

  1. The decorator pattern is an alternative to subclassing. Subclassing adds behavior at compile time, and the change affects all instances of the original class; decorating can provide new behavior at runtime for individual objects.
  2. The decorator pattern can be used to extend (decorate) the functionality of a certain object at runtime, independently of other instances of the same class, provided some groundwork is done at design time.
  3. If an object is known to be immutable, it can be copied simply by making a copy of a reference to it instead of copying the entire object. Because a reference (typically only the size of a pointer) is usually much smaller than the object itself, this results in memory savings and a boost in execution speed.
  4. Immutable objects can be useful in multi-threaded applications. Multiple threads can act on data represented by immutable objects without concern about the data being changed by other threads. Immutable objects are therefore considered more thread-safe than mutable objects.
  5. All of the primitive wrapper classes in Java are immutable.
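To make the decorator tips concrete, here is a minimal example in Java; the Message types are invented purely for illustration:

```java
// Decorator pattern: add behavior to an individual object at run time.
// The design-time groundwork is the shared Message interface.
interface Message {
    String text();
}

class PlainMessage implements Message {
    private final String body;
    PlainMessage(String body) { this.body = body; }
    public String text() { return body; }
}

// Decorator: wraps any Message and adds behavior without subclassing it.
class ExcitedMessage implements Message {
    private final Message inner;
    ExcitedMessage(Message inner) { this.inner = inner; }
    public String text() { return inner.text() + "!"; }
}

public class DecoratorDemo {
    public static void main(String[] args) {
        Message plain = new PlainMessage("hello");
        Message excited = new ExcitedMessage(plain); // only this instance changes
        System.out.println(plain.text());   // prints hello
        System.out.println(excited.text()); // prints hello!
    }
}
```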

Wednesday, December 01, 2010

Proxy Pattern

http://www.informit.com/articles/article.aspx?p=1398608


A proxy object can take on the responsibility that a client expects and forward requests appropriately to an underlying target object. This lets you intercept and control execution flow, providing many opportunities for measuring, logging, and optimization.

A classic example of the Proxy pattern relates to avoiding the expense of loading large images into memory until they are definitely needed: proxies for the images act as placeholders that load the required images on demand.

Designs that use Proxy are sometimes brittle, because they rely on forwarding method calls to underlying objects. This forwarding may create a fragile, high-maintenance design.

Dynamic proxies let you wrap a java.lang.reflect.Proxy object around the interfaces of an arbitrary object at runtime. You can arrange for the proxy to intercept all the calls intended for the wrapped object. The proxy will usually pass these calls on to the wrapped object, but you can add code that executes before or after the intercepted calls.

[The article provides an example of using the Proxy pattern to delay loading images until needed, for performance and memory optimization.]
[The article provides an example of using a dynamic java.lang.reflect.Proxy to measure execution times of method calls and log when they take too long.]
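A sketch of the dynamic-proxy timing technique described above; the java.lang.reflect calls are standard, but the threshold and logging details are my own assumptions:

```java
import java.lang.reflect.InvocationHandler;
import java.lang.reflect.Method;
import java.lang.reflect.Proxy;
import java.util.ArrayList;
import java.util.List;

// Timing proxy: wraps any object behind one of its interfaces and logs
// calls that exceed a threshold (threshold and log format are illustrative).
public class TimingProxy implements InvocationHandler {
    private final Object target;
    private final long thresholdNanos;

    private TimingProxy(Object target, long thresholdNanos) {
        this.target = target;
        this.thresholdNanos = thresholdNanos;
    }

    @SuppressWarnings("unchecked")
    public static <T> T wrap(T target, Class<T> iface, long thresholdNanos) {
        return (T) Proxy.newProxyInstance(
                iface.getClassLoader(),
                new Class<?>[] { iface },
                new TimingProxy(target, thresholdNanos));
    }

    @Override
    public Object invoke(Object proxy, Method method, Object[] args) throws Throwable {
        long start = System.nanoTime();
        try {
            return method.invoke(target, args); // forward to the wrapped object
        } finally {
            long elapsed = System.nanoTime() - start;
            if (elapsed > thresholdNanos) {
                System.err.println(method.getName() + " took " + elapsed + " ns");
            }
        }
    }

    public static void main(String[] args) {
        @SuppressWarnings("unchecked")
        List<String> timed =
                (List<String>) wrap(new ArrayList<String>(), List.class, 1_000_000L);
        timed.add("hello");
        System.out.println(timed.size()); // prints 1
    }
}
```

Note that the proxy must be typed by the interface (List here), not the concrete class, which is the usual constraint of java.lang.reflect.Proxy.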

Diagnosing Web Application OutOfMemoryErrors

http://www.infoq.com/presentations/Diagnosing-Memory-Leaks

Tips:

Common causes of PermGen memory leaks in a web-server application are registries holding references to multiply loaded classes (from logging frameworks, JDBC drivers, GWT), which keeps references to the web application class loaders alive.

The process heap consists of: PermGen, thread stacks, native code, the compiler, the GC, and the Java heap (young and old generations).

Class objects are loaded into PermGen.

Common OutOfMemoryErrors that are not memory leaks: too many classes (increase PermGen); too many objects (increase the heap, or decrease the number or size of objects); stack overflow (reduce or alter recursion, or increase the stack size).

Memory leaks are indicated by steady increases in memory use and more frequent GC; however, these can also be normal for the system, so they are only indicators.
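One cheap way to watch for that steady increase is the standard java.lang.management API. This is only a minimal sketch; real monitoring would sample repeatedly over time and correlate with GC activity:

```java
import java.lang.management.ManagementFactory;
import java.lang.management.MemoryMXBean;
import java.lang.management.MemoryUsage;

// Minimal heap sampler: take readings before and after some workload.
// A genuine leak shows up as used-heap readings that climb steadily even
// after GC; a single pair of samples is only an indicator.
public class HeapSampler {

    public static long usedHeapBytes() {
        MemoryMXBean memory = ManagementFactory.getMemoryMXBean();
        MemoryUsage heap = memory.getHeapMemoryUsage();
        return heap.getUsed();
    }

    public static void main(String[] args) {
        long before = usedHeapBytes();
        byte[] workload = new byte[10 * 1024 * 1024]; // simulate allocation
        long after = usedHeapBytes();
        System.out.println("heap grew by ~" + (after - before) + " bytes");
        if (workload.length == 0) { throw new AssertionError(); } // keep it reachable
    }
}
```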

Apart from gross heap sizes, different garbage-collector algorithms need to be tuned differently. The default is probably a good starting point.

In Tomcat, putting a JDBC driver into the WEB-INF/lib directory can cause a memory leak (put it in common/lib and there is no leak): the web application class loader gets pinned in memory, and reloading the application causes the actual leak. Look for instances of the web application class loaders - there should be one per application; any extras are a memory leak (the leaked ones will have a "started" field set to false). Find the GC roots and see what is keeping them alive.

Finding the reference holding memory on reloads in Tomcat web applications is straightforward, but that doesn't tell you how the reference was populated. For that you need allocation stack traces, which are horrendously expensive to collect, so this can only be done in debug mode.
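For the WEB-INF/lib JDBC case above, a common cleanup (alongside moving the driver jar to common/lib) is to deregister the drivers that the web application's class loader registered when the application shuts down. Here is a sketch using only the standard java.sql API; in a real webapp this would be invoked from a ServletContextListener's contextDestroyed() callback, which is assumed and not shown:

```java
import java.sql.Driver;
import java.sql.DriverManager;
import java.sql.SQLException;
import java.util.Enumeration;

// Stops a WEB-INF/lib JDBC driver from pinning the web application class
// loader: deregister every driver that this application's loader loaded.
public class DriverCleanup {

    public static int deregisterDriversLoadedBy(ClassLoader appLoader) {
        int count = 0;
        Enumeration<Driver> drivers = DriverManager.getDrivers();
        while (drivers.hasMoreElements()) {
            Driver driver = drivers.nextElement();
            // Only touch drivers our own class loader loaded; leave shared ones alone.
            if (driver.getClass().getClassLoader() == appLoader) {
                try {
                    DriverManager.deregisterDriver(driver);
                    count++;
                } catch (SQLException e) {
                    System.err.println("Could not deregister " + driver + ": " + e);
                }
            }
        }
        return count;
    }

    public static void main(String[] args) {
        // With no webapp-loaded drivers registered, nothing is deregistered.
        System.out.println(deregisterDriversLoadedBy(DriverCleanup.class.getClassLoader()));
    }
}
```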