Sunday, 26 April 2009

Fujitsu LifeBook U810 Mini-Notebook PC


The all-new Fujitsu LifeBook U810 has a 5.6" WSVGA Crystal View touch screen and weighs only 1.56 lbs, making it one of the world's smallest convertible touchscreen notebooks. It is perfect for working, IM, e-mail, watching video, listening to MP3s, browsing the Internet, taking pictures, or staying connected with friends and family.

* 5.6" WSVGA Crystal View touch screen (1024 x 600)
* Integrated 0.3 MP (640 x 480) webcam
* Input via touch, pen, or the built-in QWERTY keyboard
* Integrated Intel graphics with 3D accelerator
* 1 Type I/II CF card slot, 1 SD card reader
* Fingerprint reader
* Built-in 802.11a/b/g wireless, Bluetooth v2.0
* 1 USB 2.0 port, 1 headphone out, 1 microphone in, 1 VGA and 1 RJ-45 via adapter connector
* Unit dimensions: 6.73" (L) x 6" (D) x 1.26" (H)
* Unit weight: 1.56 lbs

Acer Aspire One 8.9-inch Mini Laptop (1.6 GHz Intel Atom N270 Processor, 1 GB RAM, 160 GB Hard Drive, XP Home, 6 Cell Battery)


A great choice for business travelers who like to travel light as well as those who need extra-long battery life, this affordable ultra-lightweight Acer Aspire One (LU.S040B.162) weighs just over 2 pounds and is packed with a 160 GB hard drive and Windows XP operating system. It has a vibrant 8.9-inch CrystalBrite WSVGA LED backlit display, integrated webcam for easy video chatting, and Intel's latest mobile processor--the Atom. Offering a cool deep blue hue, the netbook's smooth surface is comfortable to touch, and it's accented with distinctive details, such as the attractive orange hinge ring.
Designed especially for mobile devices, the 1.6 GHz Intel Atom processor uses a brand-new design structure with hafnium-infused circuitry--which reduces electrical current leakage in transistors--to conserve energy, giving you more time away from the wall outlet: up to 5.5 hours with the included 6-cell battery. Other features include 1 GB of installed RAM (1.5 GB maximum), 54g Wi-Fi networking (802.11b/g), a multi-format memory card reader, multiple USB ports, and built-in email, web browsing, and digital media applications.

It comes preinstalled with the Microsoft Windows XP Home operating system, which offers more experienced users an enhanced and innovative experience that incorporates Windows Live features like Windows Live Messenger for instant messaging and Windows Live Mail for consolidated email accounts on your desktop.

Portege M600 Notebook

With a starting weight of 1.89 kg, an attractive glossy onyx blue, titanium silver, or new glossy white casing, Intel's latest Centrino® processor technology, a 13.3" wide Clear SuperView TFT display, an integrated 1.3-megapixel camera, a Fingerprint Security Suite and a 3D HDD motion sensor, the new PORTÉGÉ M600 is designed for mobile business users, SOHO users and students, as well as anyone demanding a stylish and affordable ultraportable that does not compromise on computing power in the field.

An Image Compositor Technique for a Planet


I had a need to develop a capability within Titan Class Vision to composite many images representing parts of our planet at different resolutions. In particular, these images could be quite large; perhaps 200MB each.

I decided to give memory-mapped files a go using the low-level paging mechanisms available to modern operating systems. In a nutshell we’re looking at the use of mmap and munmap. Memory mapping files is very fast and highly optimised - it is by far the fastest way of reading bytes into memory from disk.

My main image of the planet is about 200MB in size. I keep a Windows BMP file saved in BGRAUnsignedInt8888Rev form (the optimal format for texturing on Mac OS X) and then use memory-mapped file IO to page the BMP into memory. I then have an image compositor object that is able to composite many of these images, e.g. I have a BMP for the entire planet, and one for just a given state of Australia (NSW). When client code makes a request, the compositor assembles a composited buffer of my layers as required and in the resolution required. This composited buffer is then sent to a texture using GL_STORAGE_SHARED_APPLE as an optimisation. I also keep some of these textures around for situations where I know in advance that I'm going to zoom in on something; rendering is then consequently very fast as no dynamic composition is required.

Oh, and I'm using the Mac OS X Accelerate Framework for high quality and high performance scaling given that Titan Class Vision targets this platform.

It all works pretty well and is fast, even on a tired old G4 PowerBook. Multiple processors are utilised thanks to the Accelerate Framework. One naturally has to be considerate of virtual memory usage with the compositor, but that is a resource management exercise.

Here’s the class structure that I came up with; feel free to use it in your own work but please be kind and include a reference to this page and some credits.


#include <list>
#include <string>

#include <boost/shared_ptr.hpp>

namespace WorldLayerCompositor {

    typedef unsigned long BGRA;

    inline unsigned short SwapInt16LittleToHost(
        unsigned short arg
    ) throw();

    inline unsigned long SwapInt32LittleToHost(
        unsigned long arg
    ) throw();

    class Layer {
    public:
        virtual ~Layer();

        inline void GetCentreLatLong(
            double& outLat, double& outLong
        ) const throw();

        inline double GetResolution() const throw();

        inline void GetSizePx(
            unsigned& outWidthPx,
            unsigned& outHeightPx
        ) const throw();

        virtual BGRA* GetBuffer() const throw() = 0;
    };

    class MemoryMappedBMPFileLayer : public Layer {
    public:
        MemoryMappedBMPFileLayer(
            const std::string& inFile
        ) throw();
        virtual ~MemoryMappedBMPFileLayer();

        virtual BGRA* GetBuffer() const throw();
    };

    class LayerCompositor {
    public:
        LayerCompositor() throw();

        typedef std::list<boost::shared_ptr<Layer> >
            LayerList;

        LayerList::iterator AddLayer(
            boost::shared_ptr<Layer> inLayerP
        ) throw();

        void RemoveLayer(
            const LayerList::iterator& inLayerIter
        ) throw();

        bool GetNextSubBuffer(
            unsigned long** inBGRAUnsignedInt8888RevPP,
            unsigned& outSubXPx,
            unsigned& outSubYPx,
            unsigned& outSubWidthPx,
            unsigned& outSubHeightPx
        ) const throw();

        void GetEffectiveSizePx(
            unsigned& outWidthPx,
            unsigned& outHeightPx
        ) const throw();

        inline void SetSubRegion(
            double inLat, double inLong,
            double inResolution,
            unsigned inWidthPx, unsigned inHeightPx
        ) throw();
    };
}


I intend to evolve this class much further and make it aware of planet-related things. For example, if a request is made for a region that extends over the dateline, at present I move the region back either east or west. In the future I’ll handle this so that the compositing considers the dateline.

Another thought is to be able to add layers described in vector terms using SVG and GML... I think that there are some interesting possibilities.

Software Development and the Global Economic Downturn

There appears to be plenty of software development work around at the moment.

A manager at a large company that I have been doing some work for recently commented that when a business looks to contract, the demands on IT increase.
I agree.
The reason for this is automation; something that the business should always strive for of course. The point is that they don't. Large companies would rather throw multitudes of people at a problem than optimise operational tasks... until external pressures such as world economic downturns occur. I have friends that work for some other large companies that are typically regarded as being innovative. One of the common threads is how, from an operational perspective, they always automate the mundane tasks as much as possible.
So for me at least, as a small software developer, there appears to be plenty of work around and a great deal is focused on automation.

Anatomy of a service/Apache anyone?



Here is a pattern that I have implemented a few times now.

I have had requirements to provide a web service, typically SOAP based, that persists something in a database (or whatever). Here's a general flow of events that I'm finding works particularly well:

1. Web browser to web service
A web application (typically Javascript executing in a web browser) sends a SOAP request. Given that my web service is written using Apache CXF it can generate the required Javascript for the browser's invocation by appending a ?js parameter to the web service. Check out the CXF doco to see how.

My CXF based web service is developed code-first, i.e. the SOAP interface is defined using a Java interface, which acts as the contract. The reason I like that (as opposed to a WSDL-first approach) is because WSDL appears overly complex to me. It is much easier, in my most humble opinion, to create the Java interface and have CXF generate the WSDL. I reckon that CXF does a pretty good job of this too.
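To illustrate, a Java-first contract can be as small as this (the service and method names here are made up; CXF derives the WSDL, and the ?js JavaScript, from an annotated interface along these lines):

```java
import javax.jws.WebMethod;
import javax.jws.WebParam;
import javax.jws.WebService;

// Hypothetical contract: CXF generates the WSDL from this interface.
@WebService
public interface PersonService {

    // Persists a person record and returns its new identifier.
    @WebMethod
    long addPerson(@WebParam(name = "name") String name,
                   @WebParam(name = "fingerprintId") String fingerprintId);
}
```

The annotations are the whole contract; there is no hand-written WSDL anywhere in the project.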

My web service has the responsibility of marshaling the SOAP XML into a Java object that is representative of my information model. Here's a critical point: you must have an information model to work with aka domain model. If you don't have that nutted out very well then you tend to spin wheels.

The classes of the information model are used as a normalised object for messaging and also for persistence. The web service marshals the XML into model objects, performs various integrity checks and then forwards the model objects on to a JMS queue.

I'm using Spring's JMS template classes to forward on my JMS messages; when it comes to sending messages, lots of code is taken care of using these classes (I've also programmed JMS without them so I can make a direct comparison as to what Spring offers in this respect).

In summary my web browser invokes a SOAP web service using some nice Javascript code that CXF can generate. My CXF based service has the relatively simple role of marshaling the SOAP XML input into a domain object of my information model and sending it to a queue.

Oh, by the way, I'm using Apache Tomcat to host my CXF based web application.

2. Sending to ActiveMQ
I'm typically sending to queues dedicated to specific functions e.g. "queue to persist the biometric fingerprint of a person". These queues are persistent (the JMS default) so that messages are stored until their delivery can be guaranteed.

I like ActiveMQ. Apart from the price tag being very attractive (free), the Apache products are built to a very high standard. ActiveMQ is fast and reliable.

3. ActiveMQ replies
Once ActiveMQ replies back to my web service guaranteeing delivery, my web service replies to my browser application.

4. Browser processes the reply & Camel service consumes queue
These are concurrent activities. Of course the browser delivered the SOAP request in (1) asynchronously and registered callbacks for processing any errors and successful responses. This way the user's browser remains responsive.

The design of my browser application is very much around the store-and-forward paradigm. Depending on how critical it is to feed back the results of an operation, it is generally adequate to inform the user that their request has been posted. In the case where the request is involved in retrieving, say, customer details, receiving a response is important so I would use a request-reply enterprise integration pattern. However in many scenarios, store and forward is perfectly adequate and really suits a browser style application.

Almost as soon as ActiveMQ makes a message available to the queue (2) my Apache Camel based service is able to consume it.

I like Apache Camel. It really helps me assemble services very loosely given the strong promotion of enterprise integration patterns. You can build content based routers, aggregators, request-reply mechanisms and so forth - Camel provides the framework for you to build your services on. Camel almost forces you to think about the correct separation of concerns given its abstractions.
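For a flavour of what that looks like, a route consuming the fingerprint queue from (2) might be declared in Camel's Spring XML along these lines (the URIs and bean name are illustrative, not my production configuration):

```xml
<camelContext xmlns="http://camel.apache.org/schema/spring">
  <route>
    <!-- Consume model objects from the persistent queue... -->
    <from uri="activemq:queue:person.fingerprint.persist"/>
    <!-- ...and hand them to a bean that persists them via JPA. -->
    <to uri="bean:fingerprintPersister"/>
  </route>
</camelContext>
```

Swapping the endpoint URIs is all it takes to re-route a service, which is where the loose assembly comes from.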

I typically deploy my Camel based services using jsvc - Apache's daemon toolkit (part of the Apache Commons project). jsvc is a very simple wrapper that lets your application run as a service on Unix and Windows based platforms. From a programming perspective, you implement a lifecycle interface (init, destroy, start, stop) and that's about it. I want to move toward using OSGi, and I have played with it. However I have not found an OSGi based container that I'm totally happy with yet (they don't yet appear to satisfy the use-cases of administrators - just those of programmers).
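The lifecycle contract is small enough to sketch. The class below is a made-up skeleton showing the four methods jsvc drives over the life of the process (I've left out the commons-daemon import so it stands alone):

```java
// A made-up skeleton of a jsvc-managed service. jsvc calls init(String[]),
// then start(); on shutdown it calls stop() and finally destroy().
public class CamelServiceDaemon {

    private volatile boolean running;

    // Runs first (as root, before privileges are dropped): parse arguments
    // and acquire privileged resources here.
    public void init(String[] args) {
        // e.g. read configuration locations from args
    }

    // Start worker threads / the Camel context.
    public void start() {
        running = true;
    }

    // Stop threads and drain in-flight work.
    public void stop() {
        running = false;
    }

    // Release anything acquired in init().
    public void destroy() {
    }

    public boolean isRunning() {
        return running;
    }
}
```

That really is about it - the daemon plumbing (forking, pid files, privilege dropping) is all jsvc's problem, not yours.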

I have many services doing many things, but a common task is persisting and retrieving data from a relational database. For that I program to JPA and use Hibernate as my JPA implementation. I prefer JPA over using Hibernate directly so that I have the ultimate freedom of using other ORM implementations should I need to - I don't think I will need to at this point though, as Hibernate is great.

Performance
I get great performance. The slowest parts tend to be associated with persisting data in an RDBMS (nothing to do with Hibernate etc - normal RDBMS stuff) and network latency. CXF and Camel fly along - you're mostly counting small units of milliseconds with these technologies.

Scalability
I've not had to do much in the way of scaling yet, but I know I have the correct separation of concerns to cope with most of what the world can throw at these services. In the performance testing that I have done I tend to exceed my customer's expectations. As stated above, it is network latency that appears to be the bottleneck given that the internet often sits between the browser and the web service.

Apache, Apache
It may appear that I work for Apache or something, but I don't. I do value the Apache Projects though. In summary here's what I'm presently using:

* ActiveMQ
* Camel
* CXF
* Maven
* MINA
* Tomcat

Mutual SSL authentication and LDAP authorisation for ActiveMQ

I now have quite a lot of software infrastructure supporting the business of tracking things that move, particularly aircraft flights. The backbone of all of this is ActiveMQ. ActiveMQ is an incredible messaging workhorse and easily handles 300 messages per second being received from a number of radar sensors.

I have also re-developed my Titan Class Vision client application using JavaFX (the subject of another blog entry). The goal of this re-write was to be able to deploy Vision on a wider variety of platforms than I could do so previously. Before the JavaFX version Vision was a turn-key hardware/software combination written in Objective-C/C++ and ran on Mac OS X only.
With the advent of the new version I am able to deploy Vision across the internet. Given this, there are suddenly many more potential users, and I had a security question to consider: how can I demonstrate to whoever is concerned that I have made every effort to ensure that this sensitive real-time flight data is not being misused by anyone?
Authentication
My requirement therefore became one centred around SSL. ActiveMQ permits connections to be established using SSL. Server-only authentication is fairly straightforward and covered here. What I was after though was client certificate verification, otherwise known as mutual SSL authentication. In a nutshell, the server verifies the client's certificate as one it trusts, and the client verifies the server's certificate as one it trusts.
When it comes to the client application, in my case the JavaFX application, you need to make sure that the client's keystore is accessible. Don't do as I did and try using the JRE's default keystore for this purpose. I just couldn't get that to work. Instead do as the ActiveMQ SSL page suggests and provide the client with its own keystore.
Another tip for the client, is to set the javax.net.ssl.* properties within the application itself; before you try establishing the JMS connection of course. I express the location of the client's keystore in relation to the user.home system property.
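In code that setup step amounts to something like this (the keystore file names and passwords are placeholders; call it before establishing the JMS connection):

```java
import java.io.File;

public class SslSetup {
    // Point the JSSE defaults at the client's own keystore/truststore,
    // located relative to user.home as described above. The file names
    // and passwords here are placeholders, not real values.
    public static void configure() {
        String home = System.getProperty("user.home");
        System.setProperty("javax.net.ssl.keyStore",
                new File(home, "vision-client.ks").getAbsolutePath());
        System.setProperty("javax.net.ssl.keyStorePassword", "changeit");
        System.setProperty("javax.net.ssl.trustStore",
                new File(home, "vision-client.ts").getAbsolutePath());
        System.setProperty("javax.net.ssl.trustStorePassword", "changeit");
    }
}
```

Setting the properties inside the application keeps the deployment self-contained - no fiddling with JVM launch arguments on every machine.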
From a broker (server) perspective, one has to use a JAAS LoginModule that permits certificate based authentication. Fortunately I found an ActiveMQ-JAAS class named CertificateLoginModule (of all things). One very subtle thing to note: when specifying the use of this login module in activemq.xml you must use the jaasCertificateAuthenticationPlugin element instead of the jaasAuthenticationPlugin element. I think that this is because the certificate login module requires a different login callback to obtain the client's certificate.
CertificateLoginModule is only half the picture in the same way that authentication is only half the picture. Authorisation is required and the CertificateLoginModule has to be extended to support this; the login module does not know how to authorise a certificate. I can help there as I have provided this code. More on that later though (I insist on you reading the rest of this entry!).
Finally on authentication, you need to tell ActiveMQ that you want to perform client certificate authentication; it will not do it unless told to do so. You do this by specifying "needClientAuth=true" parameter on the ssl transport in activemq.xml.
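The relevant transport connector in activemq.xml then ends up looking something like this (the URI is illustrative; 61617 is the SSL port I use):

```xml
<transportConnectors>
  <!-- Require clients to present a certificate the broker trusts. -->
  <transportConnector name="ssl"
      uri="ssl://0.0.0.0:61617?needClientAuth=true"/>
</transportConnectors>
```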


You might want to disable the other connectors and open up your firewall just for 61617 SSL connectivity. I think that once you make authentication a priority with the broker then you need to give up the non-secure connectors.
Authorisation
With authentication done (I know who you are, you know who I am), I needed to deal with authorisation (now I know who you are, what am I going to allow you to do). For this I wanted to centralise my user and group/role information. ActiveMQ allows you to specify user/group associations in its configuration file, but I wanted to do what all grown-ups do: specify my users and groups in a centralised LDAP directory.
The CertificateLoginModule requires extension to specify how authorisation is done. I have created a CertificateLoginDirectoryRolesModule that will take the subject DN from each client certificate presented (there can be many but typically just one), and then call upon my LDAP store to determine which groups the DN is a member of.
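The lookup inside such a module boils down to a subtree search for groups whose membership attribute contains the certificate's subject DN. Here is a sketch of that mapping step (the base DN and `uniqueMember` attribute are assumptions about the directory layout; the actual DirContext.search call against the LDAP server is elided so the logic stays self-contained):

```java
import java.util.ArrayList;
import java.util.List;

// Sketch of the DN-to-roles mapping a CertificateLoginDirectoryRolesModule
// performs. A real module would run the filter through DirContext.search
// and feed the matching group DNs into rolesFromGroupDns.
public class DirectoryRoles {

    // Assumed search base, matching the group hierarchy described here.
    static final String BASE_DN = "ou=activemq,ou=groups,ou=system";

    // Filter matching every group that lists the subject DN as a member.
    public static String memberFilter(String subjectDn) {
        return "(uniqueMember=" + subjectDn + ")";
    }

    // Each matching group contributes its cn as a role name, e.g.
    // "cn=jms-services,ou=activemq,ou=groups,ou=system" -> "jms-services".
    public static List<String> rolesFromGroupDns(List<String> groupDns) {
        List<String> roles = new ArrayList<String>();
        for (String dn : groupDns) {
            String firstRdn = dn.split(",")[0].trim();
            if (firstRdn.startsWith("cn=")) {
                roles.add(firstRdn.substring(3));
            }
        }
        return roles;
    }
}
```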

I have set up my LDAP server (ApacheDS - fabulous) to allow anonymous access but have also enabled access controls. This means that, by default, the LDAP server permits anonymous users very little; only the admin account has access. I then created a group named "activemq" off the "groups" node and used an ACI to allow anonymous searching of that group. I ended up with a group hierarchy as follows:


ou=system
  ou=groups
    ou=activemq (anonymous users can see this and below)
      cn=jms-services
      cn=activemq-users
      cn=com.classactionpl.javaFlightTopic.Subscribers

Correspondingly here is my authorisation mapping within activemq.xml:






"
read="jms-services"
write="jms-services"
admin="jms-services" />

"
read="jms-services"
write="jms-services"
admin="jms-services" />

read="com.classactionpl.javaFlightTopic.Subscribers" />

"
read="activemq-users"
write="activemq-users"
admin="activemq-users" />






What the above states is that jms service providers, such as my Camel based applications, can effectively publish and subscribe to anything. However my client belongs to the javaFlightTopic.Subscribers group and the activemq-users group and so can only consume from a specific topic and perform all required advisory services; the latter being an ActiveMQ requirement.
It is possible to express the authorisation mappings in an LDAP store as well. We'll see if the need surfaces.
Source Code
I have created an open-source project named jaasloginmodules that hosts this JAAS login module and have tested and used the classes. The CertificateLoginDirectoryRolesModule is ready for download and use.

Java Persistence with Hibernate - what I had to re-visit after reading the book

I recently had a JPA related issue and decided that it was high time to purchase a good book on the subject, particularly as JPA/Hibernate play a very important role in my work.


I ended up discovering "Java Persistence with Hibernate" (Bauer, King). This book is a revision of "Hibernate in Action" and it certainly reads as a complete reference to all things ORM. What attracted me to this book in particular was its coverage of JPA. I prefer to use JPA over the Hibernate interfaces so that my choice of ORM implementation is left open as much as possible. Having said that, Hibernate gives me no reason to move to another ORM.


I must have been using Hibernate/JPA for the past 18 months. When I started, I recall wanting to skim over the ORM subject as much as possible in order to get things done, i.e. I did not think deeply about the implications of some of my decisions, or rather I misinterpreted the way things worked, e.g. the 2nd level cache. To this end here is a list of things that I found myself re-visiting:


* property annotations instead of field annotations;

* using the Collection interface instead of the Set interface;

* cache annotations;

* bi-directional one-to-many relationships;

* immutable entities


Using property access can boost performance, as can operating on a Collection rather than a Set. For example, if you want to add to a collection, Hibernate does not have to load the existing collection from the database; if you use a Set then it does, in order to guarantee Set semantics.


Understanding the second level cache is very important; if you don't have time to learn it then disable it. It is also a good idea to know whether you have immutable entities so that you can use the Hibernate @Entity(mutable=false) annotation and flag to the cache that the object is read-only. This permits Hibernate a few optimisations, including minimising the number of update statements that need preparing at startup.


I always wondered why my bi-directional relationships were generating a join table. It finally sunk in that the mappedBy attribute on an association describes the foreign key. No more join tables for bi-directional relationships.
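In JPA terms the fix looks like this (the entity names are made up; this is a sketch, not my production mapping). With mappedBy naming the owning field, Hibernate maps the association through a foreign key column on the many side rather than generating a join table:

```java
import java.util.ArrayList;
import java.util.Collection;
import javax.persistence.Entity;
import javax.persistence.GeneratedValue;
import javax.persistence.Id;
import javax.persistence.ManyToOne;
import javax.persistence.OneToMany;

// Bidirectional one-to-many without a join table: MapLayer owns the
// association, so its table carries the foreign key column.
// (Classes are package-private just for brevity in this sketch.)
@Entity
class Planet {
    @Id @GeneratedValue
    private Long id;

    // mappedBy names the owning field on MapLayer; no join table is
    // generated. Using a Collection (a bag) also lets Hibernate add
    // elements without loading the whole collection, unlike a Set.
    @OneToMany(mappedBy = "planet")
    private Collection<MapLayer> layers = new ArrayList<MapLayer>();

    public Collection<MapLayer> getLayers() { return layers; }
}

@Entity
class MapLayer {
    @Id @GeneratedValue
    private Long id;

    // Owning side: this becomes the foreign key column.
    @ManyToOne
    private Planet planet;

    public void setPlanet(Planet p) { this.planet = p; }
    public Planet getPlanet() { return planet; }
}
```

As with any bidirectional association, your code still has to set both sides; mappedBy only tells Hibernate which side's column to use.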


There's no excuse; just buy the book if you're doing any ORM. Having said that even before I made these changes Hibernate was performing exceptionally well!

Address Bar Default Search Engine

The search engine wars can have a confusing effect on your browser. One day, without your consent, when you try to search through the Location Bar or Address Bar - the place where you type web addresses - the search results come from an unwanted search engine.

In Firefox, to set it back to the previous one, e.g. Google, use the following method

1. Open Firefox and type "about:config" into the address bar.
2. Search for "keyword.url".
3. Right-click on the preference and select "Modify", or simply double-click it.
4. Change the value back to the Firefox default: http://www.google.com/search?ie=UTF-8&oe=UTF-8&q=



More info:
http://kb.mozillazine.org/Keyword.URL

Questionnaire to survey the use of free (open source) software in the Arab world


As part of its interest in open-source activities, Arab Engineering (Handasa Arabia) has published this questionnaire on open source, which was prepared under the supervision of the Arab Organization for Education, Culture and Science, Department of Science and Scientific Research.
For more information, please visit the project site

Please help us circulate the following questionnaires so that the results for the Arab world can be published by the Arab Organization for Education, Culture and Science, supporting entrepreneurship and helping to build region-wide strategies:
* A section for companies, organisations and educational institutions
* A section for software developers
Alternatively, you can download the files in Arabic or English and send them to us directly by e-mail:

admin AT handasarabia.org

Introduction to Free Software



In the early days of the proliferation of electronic computers (between 1945 and 1975), software developers exchanged their programs freely in order to share experiences and ideas. Even the famous Unix operating system, developed by researchers at AT&T, was distributed to users as source code (with the right to modify it) for a symbolic sum.

However, things changed gradually: from the seventies and eighties of the last century onwards, software development became a rich source of revenue for large developers, which led many of them to sell their programs as sealed boxes whose contents were unknown and which only they themselves could modify. This applied to the Unix system as well.



Free Software / Open Source (FS / OSS)

The idea of free software / open source emerged in the early eighties of the last century, to break the monopoly of proprietary (commercial/closed) software. Proprietary software is a closed black box: the user knows nothing of its design and structure, and it cannot be adapted or modified without reference to the owning company, which grants the user the right to use it only under an (often expensive) licence, while the company retains international control over its rights and over how the software in question may be used.

In 1983, the brilliant programmer Richard Stallman, then at the famous MIT, began a rebellion against the constraints of proprietary software. He succeeded in gathering a number of creative scientists around the idea of free software and in 1984 established a non-governmental, non-profit organisation called:

the Free Software Foundation (FSF). It set to work building a free operating system (a system similar to UNIX) called GNU, publishing all of its work freely at every step for the benefit of programmers around the world, who could assist in the development of this system and benefit from it.

In the early nineties of the last century, the illustrious Finnish programmer Linus Torvalds, who had been following the development of GNU, was able to complete a central kernel (linking the various components of the operating system and regulating the relationships between them) called Linux. It developed strongly, taking advantage of two key factors: the speed of communication via the Internet, and the participation of thousands of programmers around the world in critiquing, improving and developing the work. Thus a complete free (open source) operating system was built, known as GNU/Linux (or, erroneously, just Linux); this was a significant breakthrough for the idea of free software, which spread globally.

Free software is built on a philosophy of the following four freedoms:

Freedom 0: the user's freedom to run and use the software, and to study and learn how it works.

Freedom 1: the user's freedom to modify the code and adapt it to their needs and wishes.

Freedom 2: the user's freedom to copy the software and pass it on to help a neighbour, friend, colleague or another user.

Freedom 3: the user's freedom to distribute their developed versions of the software to others.



Freedoms 1 and 3 require the software's source code to be made available to the user, so that they can understand it and (where they wish) modify and adapt the program to their requirements. These four freedoms are collectively what distinguish free software from proprietary software.
One of the most important modern legal concepts that Stallman created is the so-called "people's licence", the General Public Licence (GPL), which guarantees the continuation of the above-mentioned freedoms as free / open source software circulates.

The University of California, Berkeley also contributed many components and improvements to the Unix system. In the end, Berkeley's efforts came together in a strong open-source operating system, published in a number of variants:

NetBSD, OpenBSD, FreeBSD.

It should be noted here that Apple's Mac OS X operating system is built on these variants.