No, I'm not going to talk about Ruby and the huge amount of news about its possibility to take over the world :) I'm going to introduce you to a lesser known language: D. Yes, I presume its name has something to do with the evolution from the C language... Anyway, I wasn't really thrilled when someone told me about this new language. I immediately thought "ok, a new language...yawn". However, I was surprised when I looked at this comparison. It just seems too good to be true. My first reaction was "where's the catch? does it really work?". I had never heard of it before, but apparently it has already been on Slashdot and Dr. Dobb's. At first sight, the language specification seems very similar to C or Java, but with the best of both worlds, and more. Why does (almost) nobody know about it? Maybe it's a hidden gem that will appear in the spotlight sooner or later...or maybe it will never catch on, despite its (apparent) quality (it wouldn't be the first). Worth a try...
Friday, October 21, 2005
Tuesday, September 27, 2005
Can you guess it ?
If you remember my last post, I used a UML diagram to explain the overall architecture of Acegi Security. Can you guess which tool I made it in?
UML is one of the most well-known standards in the IT world, and many tools have been created to help us design those nifty diagrams. The first tools I met were the full-blown commercial products like Rational Rose or Together. And although they really are good products, with lots of features, it always seemed to me that they were a little fatter and heavier than they should be. Is it really that difficult to create a small tool that models UML diagrams?
Then I discovered Dia, which does a reasonably good job, and is completely free. But it's not a UML tool, it's a general-purpose diagram creator (much like M$ Visio). That can be an advantage, but it also means that certain features, typical of UML-only tools, are not present: code generation and reverse engineering. Although I rarely use those features, sometimes they can be very helpful. Overall, I think it's a good piece of software. However, I never liked how the end result looked.
So, after Dia, I tried ArgoUML. I was impressed by the different kinds of UML diagrams it allowed, including the sequence diagram, which is normally the first to be absent. I was just starting to explore ArgoUML when, by chance, I discovered Umbrello. Umbrello also has all the UML diagrams (at least all I can think of), does code generation and some reverse engineering (not tried it though). Normally this kind of tool has only one programming language in mind (when generating code or reverse engineering). Umbrello allows code generation for Java, C++, JavaScript, Python, Perl, PHP5, SQL and a few others (reverse engineering works only for C++). So what is the difference to ArgoUML? ArgoUML also allows code generation to more than one language (although not as many as Umbrello), but I didn't find an option for reverse engineering. Diagrams aside, ArgoUML has a few additional features relating to project handling, and overall it seems a more complete and also a more complex product. In terms of usability I think Umbrello takes the lead: the interface is less cluttered and the overall program is lighter, faster and easier to use. And even compared to Together, from what I can remember from when I used it a few years ago, I prefer Umbrello for its usability.
So, in conclusion, I think both ArgoUML and Umbrello are excellent UML tools that should be considered before blindly buying the full-blown commercial products.
And the answer to my initial question is: Umbrello...never leave home without it ;)
Thursday, September 15, 2005
Acegi Security Introduction
Lately I've been busy learning and trying out the Acegi Security System, which is a security framework for the Spring Framework. It's not the easiest framework to grasp, but if you have the time and will, it will pay off. During the process of learning it, I created a diagram to help me understand, and now I decided to create a small introduction around it.
Remember, this is just an example of a possible Acegi configuration. Acegi Security is very flexible and has a lot more features (like single-sign-on and remember-me services).
Introduction
For simplicity's sake, there's no need to declare all the filters in web.xml. You can use FilterChainProxy, which encapsulates a group of filters and executes them in order. As you can see in the picture, we have 3 filters lined up. Ignoring HttpSessionIntegrationFilter for now, the other two, AuthenticationProcessingFilter and SecurityEnforcementFilter, implement Authentication and Authorization, respectively.

Authentication
Authentication is the process of confirming that a user is who he claims to be. Typically, this is done by supplying a username and a password (in Acegi terms these are called the Principal and the Credentials, respectively). In a web environment, the username and password are normally provided by an HTML form submit. This submission is captured by the AuthenticationProcessingFilter (it analyzes the URLs passing by, looking for the configured string that is set in the form's "action" attribute). AuthenticationProcessingFilter then calls the AuthenticationManager, which starts the authentication process. The first step is to find the user in some repository, based on the given username. Next, the credentials are compared, and if they are equal, authentication is successful.
The base class in this process is the Authentication class. It's a simple bean composed of a Principal, Credentials and a few Authorities (which normally means user "roles"). AuthenticationProcessingFilter creates a new Authentication object with the specified username and password, and if authentication succeeds, at the end of the process the Authentication object will be filled with some Authorities.
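Under the hood, what the filter does is roughly this (a hand-written sketch, not actual Acegi internals; the package names assume the org.acegisecurity layout, and older releases used net.sf.acegisecurity instead):
import org.acegisecurity.Authentication;
import org.acegisecurity.AuthenticationManager;
import org.acegisecurity.GrantedAuthority;
import org.acegisecurity.providers.UsernamePasswordAuthenticationToken;

public class ManualAuthenticationExample {
    public void login(AuthenticationManager authenticationManager,
                      String username, String password) {
        // Principal and Credentials, no Authorities yet
        Authentication request =
                new UsernamePasswordAuthenticationToken(username, password);
        // Throws an AuthenticationException if the credentials are wrong
        Authentication result = authenticationManager.authenticate(request);
        // On success the returned object carries the granted Authorities
        GrantedAuthority[] authorities = result.getAuthorities();
        for (int i = 0; i < authorities.length; i++) {
            System.out.println("Granted: " + authorities[i].getAuthority());
        }
    }
}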
But first, we need to find the claimed user in the system. There can be a variety of user repositories in the system (database, LDAP, etc). Each of these repositories can be accessed through an AuthenticationProvider. The role of the AuthenticationManager is to maintain a collection of these providers.
Acegi has a few different types of AuthenticationProviders, but probably the most used is the DaoAuthenticationProvider. This AuthenticationProvider uses a DAO (Data Access Object) to retrieve the user from a repository (typically a database). You can use the provided JdbcDaoImpl to connect to a database through JDBC and retrieve the user information without writing any code. However, it is quite simple to implement your own DAO (I did it because I was using Hibernate): just one method that retrieves a user based on a username.
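If you roll your own DAO it can look roughly like this (a hypothetical Hibernate-backed sketch; the interface and class names follow the org.acegisecurity layout, and older Acegi releases name them differently, so check your version):
import org.acegisecurity.GrantedAuthority;
import org.acegisecurity.GrantedAuthorityImpl;
import org.acegisecurity.userdetails.User;
import org.acegisecurity.userdetails.UserDetails;
import org.acegisecurity.userdetails.UserDetailsService;
import org.acegisecurity.userdetails.UsernameNotFoundException;

public class HibernateUserDao implements UserDetailsService {

    // The single method the provider needs: find the user by username
    public UserDetails loadUserByUsername(String username)
            throws UsernameNotFoundException {
        String password = lookupPassword(username);
        if (password == null) {
            throw new UsernameNotFoundException("No such user: " + username);
        }
        GrantedAuthority[] authorities =
                new GrantedAuthority[] { new GrantedAuthorityImpl("ROLE_USER") };
        // username, password, enabled, account non-expired,
        // credentials non-expired, account non-locked, authorities
        return new User(username, password, true, true, true, true, authorities);
    }

    private String lookupPassword(String username) {
        // placeholder for the real Hibernate query
        return "secret";
    }
}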
Authorization
Authorization is the process of protecting resources so they can only be used by users that have been granted the authority to use them. SecurityEnforcementFilter is the filter responsible for the authorization process. It uses a FilterSecurityInterceptor to identify the resources that need to be secured. It holds a collection of patterns (regular expressions) and authorities (an Authority is normally just a string representing a user role). When a URL matches one of the patterns, the related authorities must be present in the Authentication object (the one that was created by the authentication process). If there is no Authentication object (the user has not logged in yet), or if the given Authentication does not have the authorities the resource requires, the application is redirected to a configured AuthenticationProcessingFilterEntryPoint. This normally just represents the URL of a login form.
Now for the first filter, HttpSessionIntegrationFilter...this is responsible for integrating the Authentication object with the HTTP session. The authentication and authorization processes described above don't really interact with the HTTP session. They aren't even aware there is such a thing. They use a SecurityContextHolder to retrieve a SecurityContext, which contains the Authentication object. When a request is made, HttpSessionIntegrationFilter loads the SecurityContext from the session and stores it in the SecurityContextHolder, where all subsequent filters can access it. At the end of the request, the SecurityContext (which may have been altered during the request) is stored in the session again. This kind of design is very flexible...all the authentication and authorization processes are independent of HTTP session handling. Thus, we could provide a SecurityContext in any other way. HttpSessionIntegrationFilter is just a thin layer on top of the rest of the Acegi Security System.
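For example, any code running behind the filters can get at the current user with something like this (a minimal sketch; only SecurityContextHolder and SecurityContext come from Acegi, the helper class is made up):
import org.acegisecurity.Authentication;
import org.acegisecurity.context.SecurityContextHolder;

public class CurrentUser {
    public static Object getPrincipal() {
        Authentication auth =
                SecurityContextHolder.getContext().getAuthentication();
        // auth is null if the user has not been authenticated yet
        return (auth == null) ? null : auth.getPrincipal();
    }
}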
Monday, August 15, 2005
Video editing in Linux
As usual, when on holidays, one must fill endless video tapes of friends and relatives doing funny stuff (and boring stuff too). Next comes the task of putting them on DVD. It's not hard, but it took me more time than I expected.
First, we have to capture the raw movie from the camera. I already have a DV camera, so that part is plain easy. I could use command-line dvgrab, but I preferred Kino.
Kino allows for capture (with full control of your camera...you can pause, play, etc. from the program) and saving to a variety of formats. I was interested in the DVD format. Kino does this job pretty well, and it also has a few additional tools for some editing, like moving and deleting scenes. I haven't tried those though. Kino can thus do the entire job, unless you really want to express your creativity and apply some fancy effects...or do more professional-like editing.
You guessed it, I wanted to express a little creativity...and for that I used Cinelerra. Cinelerra has lots of advanced editing options (more than I will ever need) and some cool effects...like chroma key...you know...when those pretty girls present the weather on TV ;)
Anyway, I didn't want to go that far....just add a few titles and images along with the movie. Cinelerra's manual is big, but it explains well all that you need to know. However, you can try this tutorial first, for a quick start.
So, in conclusion...I captured my DV movie with Kino, saved the movie as QuickTime (this is needed for Cinelerra) and did some editing in Cinelerra. After Cinelerra rendered the movie, I imported it again in Kino and converted it to DVD format.
A few tips I learned the hard way:
- Select the option in Kino for splitting the captured movie into multiple files. This makes it easier to edit afterwards, because of Kino's smart auto-split: it detects when a scene ends and another starts. It's also lighter on the machine. I tried working with a 13 GB file in Cinelerra and it kept crashing.
- Don't forget to set your movie properties (video size, pal/ntsc, etc) in Cinelerra. It doesn't ask, and if you set it wrong, you won't get good results.
- When rendering the movie in Cinelerra you have a few choices for the output format. The only one I could get to work back in Kino again was Raw DV.
- And finally, remember that this takes a lot of disk space. A 1-hour movie equals more or less 13 GB...multiplied by two because of Cinelerra's rendered Raw DV format...and then the DVD itself..:)
Have fun!
Sunday, March 13, 2005
Flash user interface
We usually think of Flash as a language for making web animations, with or without interaction. Even games... And although Flash has also given us extraordinary examples of user interface creativity, its use for creating user interfaces is far from mainstream. Flash is not very easy. I'm no expert, but normally we have to create the entire look from scratch...and although that brings flexibility, it also takes time and is more suited for artistic designers (or at least someone with that skill). I, for example, couldn't create a better interface than the one HTML gives me. So I stick with HTML.
However, if we could specify a Flash interface in XML, things would become a lot easier. That's what the Laszlo creators thought. I have already tried it and I think they made a very good product. With Laszlo you get a consistent look for windows, dialogs, buttons, sliders, etc. It clearly resembles a desktop application. You do not have to worry too much about the look (it's already great): you just create XML for the interface and a little JavaScript for animations and other dynamic content.
Another feature I think is great is that Laszlo works with data in XML format. This means that if you want to populate a table, you can easily map the XML to table columns. Also, you can get this XML from a web service very easily. Try the demos and have a look at the interactive tutorial...you'll probably be amazed.
One last thing: it's free and has better documentation than many commercial products I've seen.
Friday, December 24, 2004
Use the patterns, Luke
Patterns are good. Everyone agrees with that...which doesn't mean that everyone uses patterns. But even worse than not using patterns is using them wrong. Let's imagine you want to create a Singleton, but you implement it wrong. Because it's stated that it is a pattern, we assume it is right...we know what it does. This makes it hard to find the problem...the same way we don't test whether Sun's API is right, we assume the pattern is correct. And it will probably take some time until someone finally finds the bug.
An innovative approach to help with this problem is PEC (Pattern Enforcing Compiler). What it does is check, at compile time, whether your implementation of a Design Pattern (Singleton, for example) is correct. It's very easy to use. You only need to add the following to your code (copied from the website):
- Add "import pec.
. .*;", e.g. "import pec.compile.Singleton.*;" - Add "implements
", e.g. "implements Singleton" - Compile with PEC
So, in conclusion, this can be a handy tool if you adhere to PEC's implementation of the patterns, or if you have the time to extend PEC and create your own patterns.
Tuesday, December 14, 2004
Direct Web Remoting
Have you seen Google Suggest already? It's a new service from Google (still in beta) that suggests words as you type (similar to auto-completion in popular programming editors). I don't know the details of how it is done, but I presume it uses something similar to DWR.
DWR allows client-side JavaScript to call server-side Java, without a page refresh. Although I don't like JavaScript that much, I have to admit that with DWR we could create extremely flexible user interfaces. It's still in the alpha stage, but at least for me it worked pretty well (in Firefox, of course). The way DWR works is quite ingenious. It creates an IFRAME which calls a servlet that does our work. On reply, the IFRAME has an "onload" script that returns the result to the JavaScript. Thus, the IFRAME is used just for the request and response process and is immediately deleted.
Don't know if it will catch on, but with all the different rich client frameworks emerging, it will be a difficult fight for all.
Sunday, December 12, 2004
XML-based GUI
I recently had to develop a Swing application. I had only touched Swing a few times before. And because of that lack of experience I didn't want to do it by hand. It would take much too long (and it's quite boring too). XML-based GUIs seemed a valid alternative. A little research and I ended up choosing SwixML.
With SwixML you can define the whole user interface in a single XML file. This has several advantages: you clearly decouple presentation from business logic, it's easier to code, and the resulting XML file has far fewer lines than the corresponding Swing code. Also, for those who know the Swing API well, the transition to SwixML should be smooth. All Swing components have a corresponding XML element with the exact same name. Properties also have the same names. You can think of the XML structure as a complete representation of the Swing GUI components. This is also a clear advantage for the SwixML developers: there's no need to explain what the elements and attributes are. A link to the Swing API suffices (after code reuse, we now have documentation reuse).
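Loading the XML at runtime is done by SwixML's SwingEngine. A minimal sketch (the file name mygui.xml is made up, and the exact SwingEngine calls should be double-checked against the SwixML docs for your version):
import java.awt.Container;
import org.swixml.SwingEngine;

public class Launcher {
    public static void main(String[] args) throws Exception {
        // Action handlers referenced from the XML are looked up on the client object
        SwingEngine engine = new SwingEngine(new Launcher());
        // Builds the whole Swing component tree described in the XML file
        Container root = engine.render("mygui.xml");
        root.setVisible(true);
    }
}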
Be sure to check the samples page to see the simplicity of the XML structure.
Saturday, November 20, 2004
AOP Jargon
Aspect Oriented Programming (AOP) introduced a lot of new jargon (as if there wasn't enough already). And the problem with AOP jargon is that it's not very intuitive, and I think this scares away some people. Because, in fact, AOP is not that complicated. I've been exploring Spring AOP, and I'll try to explain some of the AOP jargon used in Spring (most of the concepts are the same in other AOP implementations).
The most common example associated with AOP is logging. Logging is something that we have to do across a lot of classes (if not all). And most of the time the code is the same across all those classes. Wouldn't it be nice if we could just say: "let there be logging in all these classes"? Well, we can. That's AOP.
Let's assume we want to log every time the foo() method is called.
public class AOPTest {
    public void foo() {
        // ...interesting stuff...
    }
}
The normal solution would be to add something like System.out.println("Entering method foo"); at the start of the foo method (or something a little better than System.out.println). With AOP we can say that a given piece of code is executed before any call to foo(). That piece of code is called an Advice. There are different kinds of advice for different situations:
- before advice - executed before the method
- after advice - executed after the method
- throws advice - executed if a specific exception is thrown
- around advice - intercepts the method, and can be used to change arguments or even prevent the method from being called
- introduction - adds methods to the class, allowing it to implement another Java interface
This should be enough, but note that Spring implements AOP through proxies. Every method call is intercepted by a proxy class before getting to the real method. This is how Advices are executed transparently before or after a method. Fortunately, you don't have to create proxies manually for every target class (although you can, if additional flexibility is required).
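To make it concrete, here is a rough sketch of a before advice wired up programmatically with a ProxyFactory (normally you would do this in the Spring XML configuration instead; the calls below are from the Spring 1.x AOP API as I remember it, so double-check them):
import java.lang.reflect.Method;
import org.springframework.aop.MethodBeforeAdvice;
import org.springframework.aop.framework.ProxyFactory;

public class LoggingBeforeAdvice implements MethodBeforeAdvice {

    // Runs before every method call that goes through the proxy
    public void before(Method method, Object[] args, Object target)
            throws Throwable {
        System.out.println("Entering method " + method.getName());
    }

    public static void main(String[] args) {
        ProxyFactory factory = new ProxyFactory(new AOPTest());
        // AOPTest implements no interface, so ask for a class-based (CGLIB) proxy
        factory.setProxyTargetClass(true);
        factory.addAdvice(new LoggingBeforeAdvice());
        AOPTest proxied = (AOPTest) factory.getProxy();
        proxied.foo(); // prints "Entering method foo" before the real foo() runs
    }
}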
I think this is all the jargon one needs to know to get started with Spring AOP...or any other AOP implementation. I used Spring AOP mainly because I was already exploring the Spring Framework, but there are others more powerful than Spring AOP, like AspectJ or AspectWerkz.
Wednesday, November 10, 2004
3D collaborative environment
For a long time many projects have tried to create a functional 3D desktop. Every project that I've seen fails at one point: performance. However, some interesting ideas appear now and then. Recently I've discovered OpenCroquet. It's not a simple 3D desktop, it's more like a 3D world. Croquet is a collaborative 3D environment that allows applications to be shared between several users. And almost any application can run inside Croquet (no need to program it specifically to work in Croquet). Croquet's architecture allows those applications to be shared in a multi-user environment. You can see the application another user is using, and you can even work with him in the same application (see the screenshots). That allows teams far apart to work together on a project.
OpenCroquet is still in early development, although there's already a version for download.
Tuesday, November 02, 2004
XML shorthands
XML has become one of the most well established standards in computer technology. It's being used in almost all kinds of applications. I won't talk about its great features and advantages...much has been said about that.
However, not everyone is fully satisfied with XML, especially with its verbose syntax. This can make documents hard to read and create. I don't usually have to create large XML documents by hand, but if I had to, I would probably have the same feeling :)
Some people got tired of this and created alternatives to XML. The objective is not to substitute XML, but to provide a way to write XML documents more easily. So, all of them offer an easy way to convert the document to full XML. You can have a look at some of those XML shorthands in this comparison. As you can see, their syntax is similar and most of the time inspired by Python: nested elements are identified by indentation (at least in SLiP and SOX). I personally never liked this (it seems a very dangerous world). But I have to admit that a few of them really are simpler than XML, and thus can make it less painful to create large documents by hand. Others (PYX and SXML), however, seem a lot harder and more confusing than XML itself.
So, are these XML shorthands really helpful? In my opinion, if you create or edit large XML documents by hand regularly and like one of those alternative languages, they can be a good solution.
Wednesday, October 27, 2004
Congratulations Firefox
Firefox 1.0 will be officially launched on November 9. It's a nice milestone. And they plan to celebrate it. There is also a campaign to raise enough money for a full-page advertisement in the New York Times (it appears they already have enough for two pages). For $30 you can have your name printed on the ad :)
Firefox is truly a great browser. I've been using it since the days it was called Phoenix.
However, there is more to Firefox than meets the eye. Firefox is based on XUL. One definition of XUL is "a cross-platform language for describing user interfaces of applications". XUL is based on XML and has a rich set of UI components. This allows you to create complex interfaces in XML, with a clean separation between presentation and logic. The combination of XUL and Firefox means that you can build entire applications on top of Firefox. Thus, Firefox becomes a framework for complex distributed applications that aren't suited for plain web pages. This is really a great thing and can boost creativity to create better distributed applications that go beyond HTML's limitations. And it appears some companies are already realizing this too.
Monday, October 25, 2004
Creating PDF documents in Java
Every day I see more of those PDF icons on web pages that link to a PDF version of the document. It's a very nice feature. It's very useful if you want to save the document for later reference, or even for printing.
I just experimented with a tool to create PDF documents in Java: iText. It's very simple to use, although the documentation could be better (I have to remember this feeling the next time I don't feel like writing documentation). I just used a few simple options to generate a PDF from a text-only article. However, it also supports graphics, and from what I've seen on their website, I think it supports just about everything you'd need to create a PDF. I just needed the following lines to show a title and a body text (yes, my requirements were simple):
Font chapterFont = FontFactory.getFont(
FontFactory.HELVETICA, 24, Font.NORMAL, new Color(0, 0, 0));
Paragraph title = new Paragraph(article.getTitle(), chapterFont);
Paragraph body = new Paragraph(article.getBody());
document.add(title);
document.add(body);
Well, in fact you'll need more code to initialize things and to generate the actual PDF. Unless you're using Spring. Spring MVC has a very nice integration with iText. With Spring MVC, a PDF document is just another way of showing the content. It's another kind of view, like a JSP or an XSLT, and it's resolved just like a JSP, by the View Resolver (you'll probably have to use ResourceBundleViewResolver, as InternalResourceViewResolver, for example, seems to be more suited to JSP pages). So I added the following to my "views.properties":
articlePDF.class=view.pdf.ArticlePdfPage
This specifies the class that generates the view for the client. Now, instead of forwarding to a JSP in the Controller, I forward to a PDF page, but for the controller it's completely transparent. I just need to tell it to forward to the articlePDF view.
iText seems good if you need to dynamically create PDF documents. However, it may not be the best solution for every situation. It depends on what kind of content you have and how complex the final document should be. For example, if you already have an XML version of the document, there are simpler ways of generating a PDF from it (Apache FOP, for example).
As for the class, it must extend AbstractPdfView. Below is the full class that I used:
import java.awt.Color;
import java.util.Map;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;
import com.lowagie.text.Document;
import com.lowagie.text.Font;
import com.lowagie.text.FontFactory;
import com.lowagie.text.Paragraph;
import com.lowagie.text.pdf.PdfWriter;
import org.springframework.web.servlet.view.document.AbstractPdfView;

public class ArticlePdfPage extends AbstractPdfView {

    protected void buildPdfDocument(Map model, Document document,
            PdfWriter writer, HttpServletRequest request,
            HttpServletResponse response) throws Exception {

        ArticleBean article = (ArticleBean) model.get("article");

        Font chapterFont = FontFactory.getFont(
                FontFactory.HELVETICA, 24, Font.NORMAL,
                new Color(0, 0, 0));
        Paragraph title = new Paragraph(article.getTitle(), chapterFont);
        Paragraph body = new Paragraph(article.getBody());

        document.add(title);
        document.add(body);
    }
}
Monday, October 18, 2004
Simple and small
When I want to learn a new technology, I most probably end up creating a small application as an example. I think it's the best way to learn. So far, whenever I needed a database for that purpose I chose MySQL. It's fast, easy to set up and it's already installed on my Linux box.
However, from now on, I think I'll be using HSQLDB instead. For those of you that don't know, HSQLDB is a relational database written in Java. Its purpose is not to compete with the other major relational databases like Oracle, MySQL, etc. Quoting from its website: "It is best known for its small size, ability to execute completely in memory and its speed." To those characteristics, I would add another: simplicity. Installing it is as simple as copying a Jar to the filesystem. Running it is as simple as getting a connection through JDBC (there are other ways, but all of them are quite simple to use).
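For example, this is more or less all it takes to talk to an in-memory HSQLDB database (a quick sketch; the driver class name and URL are the ones from the HSQLDB docs, as far as I recall):
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;

public class HsqldbDemo {
    public static void main(String[] args) throws Exception {
        Class.forName("org.hsqldb.jdbcDriver");
        // "mem:" keeps everything in memory; use "file:" for an on-disk database
        Connection con =
                DriverManager.getConnection("jdbc:hsqldb:mem:testdb", "sa", "");
        Statement st = con.createStatement();
        st.execute("CREATE TABLE article (id INT PRIMARY KEY, title VARCHAR(100))");
        st.execute("INSERT INTO article VALUES (1, 'Simple and small')");
        ResultSet rs = st.executeQuery("SELECT title FROM article");
        while (rs.next()) {
            System.out.println(rs.getString("title"));
        }
        con.close();
    }
}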
Another example of its simplicity is the way HSQLDB stores information on disk: data is written to a text file containing SQL code. The next time the database starts, the script is loaded into memory, creating the tables, inserting rows, etc. Creative :) But this approach also has its limitations...it can eat up all your memory if your database is too big. Of course, HSQLDB is not aimed at that kind of application; for small databases it can be quite efficient.
Sunday, October 10, 2004
Java Puzzles
So, you think you know Java ? Think again...
This past issue of Linux Magazine will challenge your knowledge of Java with some curious and surprising puzzles. Enjoy ;)
Friday, October 08, 2004
Linux distributions and Gentoo
I've been using Linux for a few years, and I have tried several distributions, from Slackware to RedHat and Mandrake. Those are fine distributions, aimed at the general public. However, somewhere along the way I decided to be a little bolder. I wanted to know more about Linux and how it worked. And the best way to learn was to build it myself. And that's how I started with Linux From Scratch (the name is very well chosen).
It took me a lot of time to get Linux properly installed and configured, but in the end it was worth it. The machine was lightning fast, everything was configured just right...And I had reached my objective: learning about all the small pieces that Linux is made of. No more hunting for configuration files (I had to write them all, so I knew where they were). And it was fast because every little thing was compiled on my machine, with all the right compilation flags.
However, it had a price: upgrading. As the name implies, it is from scratch...meaning that I'd have to manually compile everything I wanted to upgrade, all over again...along with the entire dependency tree (with the right versions).
I do like Linux From Scratch. In fact I think everyone should install it once. But I really mean once...unless you really enjoy compiling :)
So, I realized I would have to move on to a different distribution. Which distribution would I choose next? I didn't want to go back to RedHat-ish distributions, now that I had tasted the speed of a self-compiled distribution. And so Gentoo appeared naturally. It's a compile-based distribution (packages are compiled on your machine) but with a great set of tools (Portage) that allow you to upgrade your whole system automatically. Let me give you an example:
emerge mozilla-firefox
You only need this command to install Firefox on your machine...It will check all the dependencies, download all the necessary packages, and then compile and install them. Upgrading to a newer version is exactly the same command.
So, now you have the power of having everything compiled on your machine, and all you need to do is sit back and relax while Gentoo does all the hard work for you :)
And it gets even better...Imagine you want to upgrade all the packages that you have installed:
emerge world
As simple as that...You can even schedule a cron job to do it periodically. So, at least for now, I'm quite happy with the Gentoo distribution :)
Monday, October 04, 2004
Introducing SiteMesh
In my first post I introduced WebWork, a nice MVC framework from OpenSymphony. It turns out that they have a few more projects worth looking at. SiteMesh is a very simple framework, yet very useful. Its purpose is to apply layout and decoration to a web application.
It works in a very simple way. SiteMesh is based on Servlet Filters, so it sits between the user's request and your web application. It is clearly separated from your web application, which isn't even aware it exists.
As your web application finishes a client's request, the resulting HTML is processed by SiteMesh. Custom layout and style are then applied, and finally the modified HTML is returned to the client's browser.
There are several situations that can take full advantage of this architecture:
- You may need to choose between different layouts based on the user's preferences (SiteMesh can choose decorators based on cookies, for example).
- You can choose to have different style files applied, based on the user's browser or even language.
- You may simply want to try out a different look for your application, before you actually make the changes visible to everyone else (you can define a decorator to be applied based on a specified request parameter).
Wednesday, September 29, 2004
Spring MVC versus Struts
The more I learn about the Spring Framework, the more I like it :)
Spring is composed of several modules, and although there is a Core module, which I probably should have read about first, I began with the MVC module (mainly because I was more comfortable with it).
Comparing Spring MVC with Struts (which I have already worked with), I find Spring MVC more flexible and powerful. For example, when we develop a Struts application we normally worry about Actions and ActionForms. Actions are the entry point of any request made to the application, and ActionForms are specialized beans for encapsulating request parameters. And every "functionality" of the application must have one of each.
Spring has a more pragmatic approach. You don't need to have an ActionForm (called a Command in Spring) per Action (called a Controller in Spring) if you don't want to. There are plenty of Controllers you can extend that provide functionality for a variety of situations (multiple actions in one class, simple form support, wizard-like interfaces, etc).
Another big difference is the Command class (ActionForm). It can be any POJO (Plain Old Java Object). No need to subclass anything. This way, if you want, you can use your Model Beans along with their property types. The properties can have, in theory, any type you want, as Spring already handles a large number of common types (for the others you need to provide your own conversion mechanism).
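A minimal sketch of what that looks like (the Article command and the controller are made up; formView, successView and the rest would normally be set in the XML bean definition):
import org.springframework.web.servlet.ModelAndView;
import org.springframework.web.servlet.mvc.SimpleFormController;

// The command is just a bean: nothing from Spring to extend
class Article {
    private String title;
    public String getTitle() { return title; }
    public void setTitle(String title) { this.title = title; }
}

public class ArticleFormController extends SimpleFormController {

    public ArticleFormController() {
        setCommandClass(Article.class);
    }

    // Called after Spring has bound the request parameters onto the command
    protected ModelAndView onSubmit(Object command) throws Exception {
        Article article = (Article) command;
        // ...save the article...
        return new ModelAndView(getSuccessView(), "article", article);
    }
}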
Another great addition is Handler Interceptors. Basically, you can define a chain of classes that "intercept" the call to any Controller, to do some pre-processing and/or post-processing (very useful for authentication, for example).
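A sketch of such an interceptor (a hypothetical class, assuming the HandlerInterceptorAdapter convenience base class; it gets plugged into the HandlerMapping's "interceptors" property in the XML config):
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;
import org.springframework.web.servlet.handler.HandlerInterceptorAdapter;

public class LoginCheckInterceptor extends HandlerInterceptorAdapter {

    // Returning false stops the request before it ever reaches the Controller
    public boolean preHandle(HttpServletRequest request,
            HttpServletResponse response, Object handler) throws Exception {
        if (request.getSession().getAttribute("user") == null) {
            response.sendRedirect("login.htm");
            return false;
        }
        return true;
    }
}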
There are other differences, mainly to inject some flexibility into the process: you can choose between different ways of deciding which Controller is called, which View is used, etc.
Despite all this, Spring still has at least one shortcoming: documentation. This is a recurrent issue in open source projects. However, I feel that a lot of effort is being made to overcome this (the reference manual is very well written, albeit still incomplete). For example, I couldn't find any reference on the web that explained how to use the AbstractWizardFormController (the Controller responsible for wizard-like interfaces). I hope to post a small tutorial for it here after I'm finished with some experiments...and if in the meantime nobody else writes one ;)
Saturday, September 25, 2004
JSTL goods and bads
Although JSTL (JavaServer Pages Standard Tag Library) has existed for a while, I only gave it some attention a few days ago. It's a nice set of tag libraries, especially those for manipulating variables, strings and internationalization...However, I think they got a little bit carried away...I'm talking about the support for SQL queries. Best practices dictate a clear separation between the Presentation, Business Logic and Database tiers. They realize that, as they state that large applications should use other means (DAOs, EJBs, etc) to access the database. But even for simple applications I find their usefulness very limited...
What is a simple application? One that is small? One that we want to write as fast as possible? It really can be faster and smaller in some cases (one can have a custom taglib that automatically generates an HTML table from a SQL query). But I think sooner or later one has to do some maintenance on the application. And even in this kind of application we will have more work changing things done this way than with a clear separation between presentation and database.
But overall, a useful set of tag libraries, as I said before.
Wednesday, September 22, 2004
Inversion of Control
When I started reading about Spring, one concept kept appearing: Inversion of Control (IoC). IoC is an apparently simple design pattern, but a rather interesting one. Here's a quick example of how it can be used:
Imagine you have a class Foo and a class Dummy. Class Foo has, somewhere inside, the following lines:
...
Dummy dummy = new Dummy();
dummy.doSomething();
...
Pretty standard stuff :)
This example tells us one thing: there is a dependency between Foo and Dummy. Sometimes we don't want that...we need to use the functionality provided by Dummy, but we don't want to be tied to the Dummy implementation, because there could be several different implementations of Dummy's functionality. And if we wanted to use a Dummy2 implementation, we would have to change Foo's code. So, how do we make those two classes independent? First, instead of using the Dummy class directly, we could use a Java interface that publishes the functionality we want:
public interface Dummy {
    public void doSomething();
}
And then, create an implementation for it:
public class DummyImpl implements Dummy {
    public void doSomething(){
        // ...
    }
}
And in the Foo class we could have:
Dummy dummy = new DummyImpl();
dummy.doSomething();
Now, we call methods from the interface instead of calling the implementing class directly. This is good, but not enough...we still have "new DummyImpl()" in there, so the dependency isn't gone yet. To get rid of it, we can't instantiate DummyImpl inside Foo. Instead, we could have a setter method in Foo that receives an already instantiated implementation of Dummy:
Dummy dummy;

public void setDummy(Dummy dummy){
    this.dummy = dummy;
}
Now Foo is completely independent of any Dummy implementation. But if we don't instantiate Dummy inside Foo, we need to instantiate it somewhere. Or maybe not...that's where a Component Container comes in: it will instantiate it for us :)
In Spring we can define, in an XML file, the classes that we need to use, along with their dependencies. So if we ran our little example through Spring, we would have to specify the Foo class, the Dummy implementation and their dependency. Then, when we need an instance of Foo, we ask Spring for it. It will instantiate Foo, instantiate Dummy and call setDummy() on the Foo instance. Only then does it return an instance of Foo to us. This is great if we want to change the Dummy implementation: we only need to change the name of the class implementing Dummy in the XML file. You can read the Spring reference manual for a detailed explanation of how to use this.
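In code, using the container looks roughly like this (a sketch; beans.xml and the bean ids are made up, and that XML file would define a "dummy" bean of class DummyImpl plus a "foo" bean whose dummy property references it):
import org.springframework.context.ApplicationContext;
import org.springframework.context.support.ClassPathXmlApplicationContext;

public class Main {
    public static void main(String[] args) {
        ApplicationContext context =
                new ClassPathXmlApplicationContext("beans.xml");
        // Spring has already instantiated DummyImpl and called setDummy() on Foo
        Foo foo = (Foo) context.getBean("foo");
    }
}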
There are other Component Containers besides Spring, namely PicoContainer (note: Spring provides a lot more than just a Component Container). I haven't studied them, though...
You can find a more in-depth (and plainly better) explanation of what Inversion of Control is (also called Dependency Injection) in this article.