Planet Webservices

August 06, 2008

Sam Ruby: Minimalist Markup, now text/html Compatible

Bug 311366 is resolved in Firefox 3.0.1.  It may, in fact, have been fixed earlier; but my initial testing was flawed.  Thanks go out to Anne van Kesteren and James Graham for spotting the problem that was preventing me from seeing that it was fixed.

Demonstration of a minimalist HTML5 page served as text/html.

Tom Jordahl: New Java AMF Client feature in BlazeDS

You may not have noticed (what, you aren't subscribed to the BlazeDS commits forum?), but a few weeks ago Mete committed an enhancement that adds a Java AMF client API to the flex-messaging-core.jar file in BlazeDS.

You can find a specification posted here, and it is linked from the Developer Documentation page.

What does this do? Well, you can use this API to call (from Java) Flash Remoting endpoints in BlazeDS, LiveCycle Data Services, ColdFusion, PHP or whatever you have that supports AMF — which, of course, is a published specification.

This feature is available in any nightly build after 3.1.0.2602 or in the trunk nightly build. Find those builds on the BlazeDS build download page.
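To give a flavour of the API, here is a rough sketch of calling a remoting destination with the new client. It is based on the AMFConnection class this feature adds; the endpoint URL and the "destination.operation" string below are made up for illustration, so check the linked specification for the authoritative API:

```java
import flex.messaging.io.amf.client.AMFConnection;
import flex.messaging.io.amf.client.exceptions.ClientStatusException;
import flex.messaging.io.amf.client.exceptions.ServerStatusException;

public class AmfClientSketch {
    public static void main(String[] args) {
        AMFConnection connection = new AMFConnection();
        try {
            // Hypothetical endpoint and destination -- substitute your own.
            connection.connect("http://localhost:8400/blazeds/messagebroker/amf");
            Object result = connection.call("echoDestination.echo", "hello AMF");
            System.out.println(result);
        } catch (ClientStatusException cse) {
            cse.printStackTrace();
        } catch (ServerStatusException sse) {
            sse.printStackTrace();
        } finally {
            connection.close();
        }
    }
}
```

Running this obviously requires a server with a matching remoting destination; the point is simply that plain Java code, not just a Flash client, can now speak AMF to any of the servers listed above.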

Steve Loughran: My other computer is a different datacentre

Fitz explains the story behind the "my other computer is a datacenter" sticker. I have one of these on my laptop, with all the Google bits cut off.

This sticker will appear in our talk at the Hadoop UK user group event, now oversubscribed, in London in two weeks' time. I have an idea for a cool sequence of photos.

Now, here is another idea for Brian: can we have some T-Shirts that say "No, I will not fix your datacentre" ?

Charitha Kankanamge: How to use tcpmon inside Eclipse

Apache TCPMon is a utility that allows messages to be viewed and resent. It is very useful as a debugging tool. If you don't know much about this tool, you can find more information here, here or here.

I found an extension of this great tool, which can be included as an Eclipse plugin so that developers can monitor message transmission within their workspace without opening a separate tcpmon instance. It is very cool indeed.

Saliya Ekanayake, a colleague at WSO2, has developed this utility as part of his university project. Let's see how this tcpmon plugin can be used in Eclipse WTP.

1. If you haven't done so already, download and install Eclipse WTP

2. Download tcpmonitor-1.0.0.jar from here

3. Copy tcpmonitor-1.0.0.jar into the Eclipse_home/plugins directory

4. Start Eclipse

5. Select Window --> Show View --> Other --> Tcp Monitor --> TCP Monitor

6. The TCP monitor will be added as a view tab.



Now you can configure the necessary port settings and trace message transmission.

August 05, 2008

Davanum Srinivas: Too much information - Web Services Feature Pack for WebSphere Application Server V6.1

http://www.redbooks.ibm.com/redbooks.nsf/RedbookAbstracts/sg247618.html?Open

Apache Tuscany: Apache Tuscany SCA Java 1.3 released

Today Apache Tuscany made the 1.3 release of the SCA Java runtime. This release is special as it's the first SCA release since Tuscany graduated from the Incubator to become an Apache top-level project. See the Tuscany download page for details on the release.

August 04, 2008

Eran Chinthaka: Blue Angels in Seattle

Charitha Kankanamge: How to deploy Apache Axis2 on WebLogic 10

I have already discussed the steps to deploy Apache Axis2 on the IBM WebSphere, JBoss and Resin application servers. In this post, I'm going to explain the procedure to deploy Axis2 on a BEA WebLogic 10 server.

Pre-requisites:
Download and install BEA Weblogic 10.

Step1

Create a new WebLogic domain by running config.sh, located in the WebLogic_HOME/wlserver_10.0/common/bin directory.
Let's assume the new domain is axis2.

Go to your WebLogic domain directory and start WebLogic (go to WebLogic_HOME/user_projects/domains/axis2/bin and run startWebLogic.sh)

Step 2

Download Axis2.war from here

Step 3

Create a directory in your file system (e.g. /opt/axis2), copy axis2.war to that directory, and extract the axis2.war file (unzip axis2.war)

Step 4

Access the WebLogic administration console (in a browser, go to http://localhost:7001/console)

Log in to the administration console (you should have configured a username and password for the admin console when creating your WebLogic domain)

Step 5

In the left navigation menu of the WebLogic administration console, select Lock and Edit and click on Deployments.
Click on Install and select the path of the axis2 directory where we extracted the axis2.war file.



Click on Next.

Select the default option, Install this deployment as an application and click Next.

Accept the default settings in Optional Settings page and click on Next.

Click on Finish in the last page of the wizard.

Click Activate Changes in the left menu.

Step 6

Select Lock and Edit again and click on Deployments in the WebLogic admin console. You will see axis2 listed in the Deployments table.
Select axis2 and click on Start --> Servicing all requests.

In Start Deployments page, click on Yes.

That's all for deploying Axis2 on WebLogic. Let's access the Axis2 admin console and validate the installation.

Step 7

Now open a browser and go to http://localhost:7001/axis2.
The Axis2 welcome page will be displayed.

Step 8

Verify the status of the installation by clicking on the 'Validate' link. You should see the following 'Axis2 Happiness' page.



Now you can log in to Axis2 administration page and start deploying services.

If you encounter any class loading issues with some of your services, configure the <prefer-web-inf-classes> element in WEB-INF/weblogic.xml as specified in the Axis2 Application Server Specific Configuration Guide.
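For reference, a minimal WEB-INF/weblogic.xml of that kind might look like the sketch below, which tells WebLogic to prefer the classes bundled inside the web application over its own server classpath. Treat this as illustrative and check the Axis2 guide for the exact form recommended for your WebLogic version:

```xml
<?xml version="1.0" encoding="UTF-8"?>
<weblogic-web-app xmlns="http://www.bea.com/ns/weblogic/90">
  <container-descriptor>
    <!-- Prefer classes in WEB-INF/lib over WebLogic's server classpath -->
    <prefer-web-inf-classes>true</prefer-web-inf-classes>
  </container-descriptor>
</weblogic-web-app>
```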

Charitha Kankanamge: Apache JMeter book is published




There are not many books available on test automation and tools. In order to fill this void in the software testing bibliography, Emily H. Halili decided to put together the basic concepts of test automation and performance testing with JMeter.
This book was designed both to pave the path for readers to gain detailed insight into JMeter and to serve as a basic reference guide. I was the technical reviewer of this book. It consists of 140 pages and 8 chapters, and starts with a short introductory chapter on the advantages of test automation and the requirements of automated tests.
Chapter 2 gives an overview of JMeter, followed by environment setup and installation.
Chapter 4, The Test Plan, shows you all the parts of a JMeter test plan. It explains all the elements of a test plan and how they interact.
The use of JMeter in load/performance testing is demonstrated in chapter 5. In chapter 6, you will find information on the tools in JMeter that support functional or regression testing.
Chapters 7 and 8 describe some advanced topics such as database servers, using regular expressions, etc.

One of the many beauties of JMeter is that you don't need prior programming skills to use it, which makes JMeter one of the most popular open source testing tools in the testing community.
This book will definitely help testers, as well as programmers and project managers, to get a better understanding of JMeter.
The book is an easy read and you should be able to complete most of the demos within a very short time. I'm proud to have been the reviewer of this book and I'd recommend it as a must-have item on the bookshelf of any QA/test engineer.
For more information, please visit Packt publisher's website.

Charitha Kankanamge: How to validate a WSDL using Eclipse

When you create a WSDL file from scratch or use an already designed one, you must make sure it is valid. In other words, it should:
  • consist of well-formed XML (all tags closed and nested properly)
  • conform to XML Schema
  • comply with the rules and standards defined in the WSDL specification
  • comply with the rules defined by WS-I (the Web Services Interoperability organization)
The Eclipse Web Tools Platform (WTP) project provides a very useful tool which validates a WSDL against the above rules/standards.
Let's see how we can validate an existing WSDL using Eclipse WTP.

1. Download and install Eclipse WTP

2. Open the Eclipse IDE

3. Start to create a new WSDL (File --> New --> Other --> Web Services --> WSDL)

4. Give a name to the WSDL (you can use the name of the WSDL which needs to be validated) and click on Next. Accept the default options and click on Finish.

5. You will see a design view of the new WSDL file. Move to the source view by selecting the "Source" tab.

6. You will see a skeleton source for the new WSDL. Just remove it (remove all elements in the WSDL).

7. Copy the contents of your existing WSDL (suppose it is Myservice.wsdl) and paste them in the source tab.

8. Save it by selecting the save button in the Eclipse toolbar.

9. Right-click on the WSDL file and select Validate.

If your WSDL has errors, they will be shown in the Problems pane.

You may notice that we create a new WSDL, remove its content and copy the existing (already created) WSDL into the source view of the WSDL validator. I suggest that as a workaround because I could not find a way to import an existing WSDL directly into the WSDL validator.

Charitha Kankanamge: Web application testing in Ruby (Watir) - 2-minute guide

Watir (pronounced "water") is a free, open source tool which can be used to automate web applications. It is an extension of the Ruby programming language. Unlike most other testing tools, it gains the advantage of the powerful features of Ruby and simulates browser interactions in a very simple manner.
Let's see how a simple Google search can be automated using Watir in a few steps.

Pre-requisites
Install Ruby (1.8.5-24 or later)

Step 1

Install Watir. Open a command window or shell and issue the following commands:
gem update --system
gem install watir

The above two commands update the gem installer and then install Watir on your system.

Step 2

Open the SciTE Ruby editor or Notepad and create the following script.

require "watir"

ie = Watir::IE.new

ie.goto("http://www.google.com")

ie.text_field(:name, "q").set("WSO2 WSAS")

ie.button(:name, "btnG").click


Step 3

Save the above file as SimpleTest.rb and run it from the command line by typing SimpleTest.rb.
You will see an Internet Explorer browser instance pop up automatically, access Google, type the text "WSO2 WSAS" and click on the search button, just as a user would interact with the web site.

Step 4

Let's see what each of the statements in our test script does.

require "watir" - This is similar to an import statement. It tells the Ruby script to use Watir as an extension library

ie = Watir::IE.new - Instantiates a new IE browser instance and opens it

ie.goto("http://www.google.com") - Instructs the IE instance to access google.com

ie.text_field(:name, "q").set("WSO2 WSAS") - Sets the text "WSO2 WSAS" as the search query

ie.button(:name, "btnG").click - Clicks the "search" button

If you need to simulate web interaction with Firefox, you can use FireWatir, which allows you to write test scripts for the Firefox browser.

August 03, 2008

Afkham Azeez: Handling java.lang.InterruptedException

What do most Java developers do when they are faced with handling an InterruptedException? This should look very familiar:

try {
    // Do something
} catch (InterruptedException ignored) {
    log.debug("Exception", ignored);
}

We've seen plenty of code that does this; it makes us wonder whether InterruptedException is just useless and simply clutters the code. The question is, "Is this the correct way of handling InterruptedException?" The answer is, "No, it isn't always the best way of handling this exception."

Dealing with InterruptedException is an excellent article that points out how this exception needs to be handled in different scenarios. It is a "must read" for any Java developer. When an InterruptedException is thrown, the interrupted status of the thread is cleared. Hence, at the very least, we should set the interrupted status again using the Thread.currentThread().interrupt() method so that somebody else can handle it, if necessary.
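To make that advice concrete, here is a small self-contained example (my sketch, not from the article): the catch block re-interrupts the current thread, so code further up the stack can still observe that an interrupt happened.

```java
public class InterruptDemo {
    // Sleeps, and on interruption restores the thread's interrupt status
    // before returning, so callers further up can still detect it.
    static boolean sleepQuietly(long millis) {
        try {
            Thread.sleep(millis);
            return false; // completed normally
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt(); // restore the cleared status
            return true;  // we were interrupted, and the status is set again
        }
    }

    public static void main(String[] args) throws InterruptedException {
        Thread worker = new Thread(() -> {
            boolean interrupted = sleepQuietly(10_000);
            System.out.println("interrupted: " + interrupted
                    + ", status restored: " + Thread.currentThread().isInterrupted());
        });
        worker.start();
        worker.interrupt();
        worker.join(); // prints: interrupted: true, status restored: true
    }
}
```

Swallowing the exception instead would leave isInterrupted() false, and a loop further up that checks it would never notice the cancellation request.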

August 01, 2008

Sam Ruby: Open Standards

Simon Phipps: Field-of-use restrictions have no place in open source.

+1

Keith Chapman: Benefits of an Open Source SOA solution

Stumbled upon an interesting article on the benefits of an Open Source SOA solution, and I can't agree more with it. It lists 5 advantages of using Open Source to meet SOA needs. They are:
  • Try before you buy
  • Lower cost of entry
  • Cost effective support
  • Core competency
  • For the people by the people
Let me elaborate on point 4 above (Core competency) with respect to WSO2. At WSO2 we've built a complete SOA platform from scratch. The core components we use were designed with SOA in mind. They were not mere afterthoughts.

The article also mentions WSO2 as a major open source stack provider. And it's a pretty impressive stack at that; at WSO2 the whole SOA stack we have is open source. Products in this stack include the fastest Enterprise Service Bus in the form of the WSO2 Enterprise Service Bus (ESB), the award-winning WSO2 Web Services Application Server (WSAS, a runtime for hosting services), the WSO2 Registry, and the WSO2 Mashup Server (which helps you compose services using JavaScript with the E4X extension). In addition, it provides frameworks such as WSF/Spring, WSF/C and WSF/C++ which help you build and invoke services. It has also extended its frameworks to various scripting languages in the form of WSF/PHP, WSF/Ruby, WSF/Python and WSF/Perl. All these frameworks help you build and invoke enterprise-grade services which may use WS-Security, WS-ReliableMessaging, WS-Addressing, etc.

WSO2 has been around for about 3 years, and that's a pretty impressive product portfolio for a company that young.

Paul Fremantle: Why interop?

This excellent article - "Can AMQP break IBM's MOM Monopoly?" - explains in clear terms the difference between API standardization (JMS) and wire-level interop standardization (AMQP).

This is a distinction that lies at the heart of WSO2. We founded the company on the premise that wire-level interop was a bigger game changer than API compatibility. I think the REST crowd would have to agree.

Five years ago, J2EE was still seen as "the answer" by the enterprise vendors. But even then, most Java programmers weren't J2EE programmers, and most programmers weren't Java programmers. And since then we have seen Spring and dependency-injection challenge the core concept of enterprise APIs, and Ruby, PHP, Python, F#, Scala, Erlang challenge Java. The only points of agreement are web-protocols - HTTP/REST, SMTP/email, XMPP and SOAP/WS-*. I hope we can soon add AMQP to the list as a high-performance reliable protocol.

July 31, 2008

Tom Jordahl: BlazeDS Documentation Update

The documentation team has posted an update of the BlazeDS documentation that includes all of the relevant content from the new LCDS 2.6 Developers Guide.

By the way, here is a gateway page to all of the LCDS 2.6 documentation, which has been reorganized to be much easier to read and use.

The HTML in Livedocs has not been updated yet, but a new PDF version is available.

See details on the Flex documentation blog here.

Nandana Mihindukulasooriya: Why can't we ship Apache Rampart as a standalone module, all in one?

This question pops up from time to time on the mailing lists, so I thought of digging into it to see why it really is not possible. I totally agree that it would be really handy if we could ship Rampart and all its dependencies in a single mar file, so that deploying Rampart would be just a matter of dropping that mar file into the modules directory of the repository. So let's see what the problems are. I see two problems here.

The first one is the critical one. It is related to how Apache Neethi, which is the policy implementation that Axis2 uses, loads the assertion builders. It uses the Service Provider Interface (SPI) pattern to load assertion builders. In SPI, we have services, which are normally interfaces or abstract classes that define some service, and service providers, which are the concrete implementations of that service. Neethi uses SPI to get the correct assertion builder for a given assertion; in this case, org.apache.neethi.builders.AssertionBuilder is the service. So how do we configure service providers? Each module in Axis2 which deals with WS-Policy can provide assertion builders for its domain assertions using a configuration file with the same name, "org.apache.neethi.builders.AssertionBuilder", placed in the META-INF/services directory of the relevant domain-specific jar file. For example, if you look at the org.apache.neethi.builders.AssertionBuilder file in the META-INF/services directory of rampart-policy-x.x.jar, you can see that it lists a set of service providers which implement the org.apache.neethi.builders.AssertionBuilder interface. The same goes for the Sandesha2 policy jar file. What Neethi does is create a map from assertion QNames to assertion builder instances, using this static code block in org.apache.neethi.AssertionBuilderFactory.
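A provider configuration file of this kind is nothing more than a plain-text list of implementation class names, one per line. A file named META-INF/services/org.apache.neethi.builders.AssertionBuilder inside a policy jar could look like this (the class names below are purely illustrative, not Rampart's actual list):

```
org.example.policy.builders.MyTokenAssertionBuilder
org.example.policy.builders.MyBindingAssertionBuilder
```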

static {
    AssertionBuilder builder;

    for (Iterator providers = Service.providers(AssertionBuilder.class);
            providers.hasNext();) {
        builder = (AssertionBuilder) providers.next();

        QName[] knownElements = builder.getKnownElements();
        for (int i = 0; i < knownElements.length; i++) {
            registerBuilder(knownElements[i], builder);
        }
    }

    registerBuilder(XML_ASSERTION_BUILDER, new XMLPrimitiveAssertionBuilder());
}


Note that Neethi doesn't use sun.misc.Service but its own utility class, org.apache.neethi.util.Service, to do this. If we look at the Service class, it looks for org.apache.neethi.builders.AssertionBuilder files using the classloader of org.apache.neethi.builders.AssertionBuilder.


ClassLoader cl = null;
try {
    // cls is AssertionBuilder.class in our case
    cl = cls.getClassLoader();
} catch (SecurityException se) {
    // Oops! Can't get this class loader.
}
// Can always request your own class loader. But it might be 'null'.
if (cl == null) cl = Service.class.getClassLoader();
if (cl == null) cl = ClassLoader.getSystemClassLoader();


So here is the tricky part. It looks for the service provider configuration files using the classloader of the AssertionBuilder class. If we want our service providers (that is, the domain-specific assertion builders) to be found, they must be on the classpath of the classloader of the AssertionBuilder class, which lives in the Axis2 lib. In this case, that means the Rampart jars which contain the service provider configuration files also need to go into the Axis2 lib, because AssertionBuilder must be able to load the service provider classes listed in the org.apache.neethi.builders.AssertionBuilder files. So until we solve this problem, having a standalone Rampart module is not possible.

The second problem is that we have two modules, Rampart and Rahas, and if we ship them as standalone modules we may have to ship all the Rampart jars and dependency jars in both of those modules, as the modules have separate classpaths when deployed in Axis2. Anyway, this is not a blocker and may not be much of a problem.

When talking about this topic, some people tend to think that loading the password callback classes is also an issue here, but it is not. The supposed issue is that service password callback handlers are packed in the service's archive (.aar), the service has a separate classloader, and Rampart/WSS4J, which lives in a separate module classloader, needs to load these classes to get the passwords for various functions. But this is not a problem, because it is explicitly handled by Rampart, as you can see in the code snippet that loads password callback handlers in org.apache.rampart.util.RampartUtil#getPasswordCB().

String cbHandlerClass = rpd.getRampartConfig().getPwCbClass();
ClassLoader classLoader = msgContext.getAxisService().getClassLoader();

log.debug("loading class : " + cbHandlerClass);

Class cbClass;
try {
    cbClass = Loader.loadClass(classLoader, cbHandlerClass);
} catch (ClassNotFoundException e) {
    throw new RampartException("cannotLoadPWCBClass",
            new String[]{cbHandlerClass}, e);
}


As you can see, password callbacks are loaded through the service's own classloader, so they pose no problem for a standalone Rampart module.

July 29, 2008

Steve Loughran: Like a datacentre, only more than one of them.

Earlier this month, at the Apache-UK academia workshop, I was pushing Hadoop as something that mattered. In a not entirely unrelated event, HP, Yahoo! and Intel have just announced a Cloud Computing Test Bed, which will consist of 6+ datacentres, each for experimentation with cloud computing applications. Hadoop and applications on top of it are going to be a key part of this. But not the only things that run on it. It really is a testbed, not just a hadoop-to-go system. Which means that if someone wants to do some OS fun, or play with completely new applications, they can ask for time on some of the machines.

This makes the test bed interesting in two ways. Firstly, Hadoop and the layers above it provide immediate value: map/reduce, data mining, stuff on top. Secondly, nobody is saying Hadoop-only. If someone wants to build a distributed object infrastructure on top of WS-ResourceTransfer (who would do that?), they are free to apply for test-bed time, alongside anyone else. This makes it profoundly different from, say, the OGSA-approved grid fabrics, and gives it a bit of the flexibility of PlanetLab. There are still lots of details to get sorted out about how to get access; the bias will be towards short-lived over long-lived computation, and initially it will be the companies and the partner institutions that will be running code on the machines.

I certainly hope that alongside academic (including UK academic) and industrial applications, open source projects get time too -not just the Hadoop/HBase/Mahout + incubating layers, but things that do interesting work with shared datasets on top of the tools. Again, this is somewhere where some open-source/academic collaboration would be interesting.

Some press:

This is really exciting stuff. I'm not going to add any more on the topic right now, because I don't want to do anything that would upset the press teams of the various companies, and make anything resembling a forward looking statement. Certainly nothing I have posted should be interpreted as any form of commitment by myself or my employer. As usual.

July 28, 2008

Sam Ruby: Updated Decimal Implementation


I’ve added decimal literals and support for both unary and binary operators on top of SpiderMonkey.  My approach is that when all arguments are Decimal, the results are Decimal; otherwise the precision is lost.  An example to make it clear:

js> 1.21  - 1.11
0.09999999999999987
js> 1.21  - 1.11m
0.09999999999999987
js> 1.21m - 1.11
0.09999999999999987
js> 1.21m - 1.11m
0.10

More details here.  Code here.  Mozilla tracking here.
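For readers who want to try the same contrast outside SpiderMonkey, Java's BigDecimal behaves like the decimal literals above. This small program is my own illustration, not part of Sam's patch:

```java
import java.math.BigDecimal;

// Binary doubles lose precision exactly as the first three js> lines do,
// while decimal arithmetic keeps the exact result.
public class DecimalDemo {
    public static void main(String[] args) {
        System.out.println(1.21 - 1.11);   // binary: rounding noise (0.0999...)
        BigDecimal a = new BigDecimal("1.21");
        BigDecimal b = new BigDecimal("1.11");
        System.out.println(a.subtract(b)); // decimal: 0.10
    }
}
```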

July 26, 2008

Steve Loughran: Pluggable Hadoop

Tom White looks at how people are extending Hadoop, including my little plan for a consistent lifecycle for Hadoop services.

Although it will make subclassing easier, my real goal there is to make it possible to start, stop and ping these services. The fact that I've had to subclass the existing stuff today is because they don't have easy ways to start and stop them, and no liveness checks at all. With a unified base class and lifecycle, most of my subclassing hacks are unneeded. No, where I'm looking at doing interesting stuff is in configuring Hadoop; being able to manage the stuff is a precursor.

Looking at the other areas of work, I think scheduling will get the most interest from different people. Why? Because it's where people like Platform Computing deliver value. It's not the APIs for grid computing; it's in distributing work to chosen machines. The current job scheduler works, but it is very simple. Every task worker node has a number of 'slots'; work is assigned to workers with spare slots. The scheduler is location-aware, looking for the open slot closest to the data, but there is no real examination of how much work a node is actually doing, what the expected workload of the new job is (based on past experience), or anything resembling balanced scheduling between users. Over time, that's where the fun is going to be. Watch that space.

Steve Loughran: How to write untestable code

Nice list on the Google testing blog on how to write 3v1l untestable code; a list of don'ts, most of which I agree with.

The one I don't agree with is the no-utility-class rule, which doesn't hold in a language (Java?) that doesn't let you patch new methods into existing classes.

Paul Fremantle: Microsoft sponsors Apache

Yep. I know it's hard to believe! But then, who would have believed Microsoft would have an open source vendor at a TechEd keynote?

Ted asks "what it will be like when the first MS project shows up at the Apache Incubator..."

I'm asking what it means when Oracle (through BEA), Yahoo, Google and IBM have Apache projects that they have incubated and use, and HP and Microsoft are sponsors. It certainly means Marc is wrong about the relevancy of Apache.

July 25, 2008

Sam Ruby: New ASF Platinum Sponsor

Sam Ramji: Microsoft is becoming a sponsor of the Apache Software Foundation (ASF).  This sponsorship will enable the ASF to pay administrators and other support staff so that ASF developers can focus on writing great software.

Thanks!

Ajith Ranabahu: Microsoft and open source

After reading the recent post from Sanjiva, I was tempted to write this bit down. I've been debating similar points with my friends, especially Karthik, who has become an avid Mac/OS X fan over the last year or so. For the record, I'm primarily a Linux user (in fact I'm using one of my Ubuntu boxes right now), but I do use Windows (XP though) to run iTunes and the occasional Office product. So here are some of my observations.

1. Microsoft is a good technology company. If you look at the world with a researcher's hat on, the best place to work right now is Microsoft Research (MSR). They do really good research and come up with great technologies (not to mention the good pay :). In fact, my personal experience at M$ during the interop event in 2005 was a very positive one regarding the technology they have. It's not really a surprise, since they have some of the best brains working for them. If you look at technologies like .Net and Silverlight, there are many merits over the other prevailing technologies. However, M$ has a bad track record and is known for its extreme focus only on its own (WinXX/MSSQL etc.) platform, and the public looks on with skepticism when it comes to using its technologies. I've met Mac/Linux geeks that are interested in M$ technologies but are not getting into them simply because they're from M$. However, that is not a reason to discredit M$ from a technological perspective.

2. There are tendencies towards a more open environment. Port25 is a very good starting point. The Open Specification Promise is a good guarantee. I see these as signs of blending in with the open source culture that is blossoming. In fact, one of their recent moves, offering facilities for certain open source foundations to test their software on WinXX platforms (the details of which I am sadly not at liberty to discuss), is an indication of their realization of the strength of open source. However, just as Sanjiva and others mentioned, we are not gonna see an open Windows or a community MSSQL soon (or maybe ever!). Purely in my POV, it's unrealistic to assume so. Do we see any other big company (IBM/Oracle/Apple/Google) open sourcing their core products? No, not even a company like Google, whose motto is 'Don't be evil'!

3. Microsoft will always be a strong presence on almost all technological fronts. Contrary to popular geek belief, I don't expect M$ to go bankrupt soon. Yeah, Apple is doing well, but if you consider global sales, Apple is still miles behind. Xbox is doing well (despite the Wii beating it to first place recently). Their enterprise / back office products are doing fairly ok. They have a ton of money in the bank and most of the best brains still work for them. These guys are not gonna be wiped out of existence just like that.

4. Their size makes them a prime target. I think of this as the case of the bright, big, nerdy kid in school: they constantly get teased just because they are noticeable. There are many cases where M$ has become the victim of attacks just because they are the most visible player. (Note that they are no saints: there are cases where M$ pushed their own agenda. But at the end of the day it comes down to business, and any other software company in the same place would not have gone a different way.) For example, in the operating system space Apple gets a lot of attention, and a lot of people would pick M$ as the bad guy if you put Apple and M$ side by side. Think about it: OS X is as proprietary as Windows, only runs on Apple hardware (which you pay a fortune to get), and Apple goes to extremes when it comes to protecting its assets. Why are they not perceived as evil then?

I am not whitewashing M$. But I believe they won't just go away, and an (apparently) healthy relationship with the open source world is developing. They are already supporting open source software vendors, building on top of open protocols and starting to take small steps towards living in harmony with the others. We should not jump to conclusions :)

Sam Ruby: Open Web Foundation

Eran Hammer-Lahav: This morning at OSCON, David Recordon announced the creation of the Open Web Foundation. The Open Web Foundation is an attempt to create a home for community-driven specifications. Following the open source model similar to the Apache Software Foundation, the foundation is aimed at building a lightweight framework to help communities deal with the legal requirements necessary to create successful and widely adopted specification.

Having been involved with the ASF for some time, I’m concerned that a number of “reflexes” are initially out of alignment with this group.

Examples: "Joining this group requires approval." and "We shouldn't start with an open specification for a DSL modem authentication protocol as I doubt we have the domain expertise to do a good job."

Key question: who is "we"?  Oversimplifying what the Apache Incubator does, it makes sure that a "podling" has a diverse and sustainable set of contributors, who are committed to do their development openly and collaboratively, and have the rights and desire to license all the necessary IPR under the terms specified by the Apache Software Foundation.

Verifying that such a group exists and meets these criteria does not generally require domain expertise.

If this group evolves to the point where it finds the right balance of enabling and getting out of the way, this foundation could be a very handy thing to have around.

Sanjiva Weerawarana: It's over the hump Tim .. give it a rest

Sigh. Tim Bray didn't get the memo: REST is now beyond the peak of the hype curve and is sliding down. Waay down.

Just because I can't resist: so Tim, REST does need tools now?? Funny how the world turns, eh? I thought you and the rest of the REST fanatics have argued violently about how REST doesn't require tools, doesn't require WSDL or an equivalent, etc. I guess we will end up with REST-* before it's all said and done.

The other topic Tim touches on is how the world is now not all about Java. On that we agree .. except:
"Up until two years ago, if you were a serious programmer you wrote code in either Java or .Net,"
Er, dude, which planet have you been on? Two years?? PHP has been kicking Java's butt for 5+ years!!!

The multi-language boat sailed a LONG time ago and Sun (as usual, I should add) kept sticking its head in the sand waiting for it to blow over. Of course it didn't, and it will not. Now that Sun has finally started recognizing that not everyone will love Java, I guess it's time for the mouthpieces to speak up and try to spin it positively, saying they did it at the right time. Sorry, you guys missed the boat. Badly.

In any case, Sun still doesn't get it. Neither does IBM. The JVM will not be the only runtime to run languages - while it's cool to implement PHP-like syntaxes on the JVM etc., you are going to have to learn to live in a world where not everything runs on the JVM and where all of those crappy JSRs that have been done in the last 10+ years have absolutely no meaning. (In fact, most of the JSRs only apply to Java the language .. making them even more irrelevant.)

Of course there are (and will be) some great languages on the JVM: Groovy, JRuby and more. However, even if JRuby performs better on the JVM than native Ruby does (which is of course because the Ruby impl ain't great), that doesn't mean the strategy will work for all. Seriously, try doing JErlang in that case.

The world is inherently heterogeneous, even in languages and language runtimes. There are 3 core platforms in existence today: C, the JVM and the .Net CLR. Every language runs on top of one of those. Confining yourself to only one of them automatically limits the market you can address.

(Plug for Axis2 & WSO2.) This is exactly why when we started the Axis2 project back in August 2004, we intentionally stayed away from burning Java JSRs into the core of it. That's also why we explicitly made design decisions that could be realized in both Java and C. I actually always wanted to do a .Net version of Axis2 too, but never quite got around to it. The idea was to cover all the bases.

This is also why, when we started WSO2 in August 2005, we decided to invest heavily in building Axis2/C in addition to Axis2/Java. We now have coverage for Java, JavaScript, Jython, Groovy, Grails, Spring, C, C++, PHP, Perl and Ruby. Python is coming, and hopefully Erlang too soon.

Oh yeah, we support both WS-* and RESTful services. They won't meet the Kool-Aid drunkenness level of RESTafarian fanatics like Tim Bray .. but if you want to do pragmatic work with services and support either or both of WS-* and REST, then take a look at Apache Axis2 (Java & C), WSO2 WSAS, WSO2 Mashup Server, WSO2 WSF/{C,C++,PHP,Perl} etc...

Sanjiva WeerawaranaHow to handle SOA vendor consolidation

Paul Krill over at InfoWorld has written a great story on SOA vendor consolidation. He notes how even the mighty IBM has to deal with WSO2 because of open standards based interoperability that is inherent in SOA scenarios.

Later he lists the major SOA vendors .. and it's great to see WSO2 listed there - and as the only open source one to boot.

July 24, 2008

Sam RubyRuby 1.9: What to Expect

slides for OSCON 2008 presentation

A number of the members of the audience were more informed on the subject than I was (excellent!).

There was a vigorous discussion on the slide about for...in not exactly paralleling .each in semantics. Initially the audience was overwhelmingly in favor of providing feedback that the two should be the same, but after some discussion the consensus was less clear.

Errata generated during the talk:

Steve LoughranTour de France -Galibier and Alpe d'Huez

For the past few years, I've found that even though HDD video recorders let you build up a backlog of Tour de France coverage to watch later, by the time you get round to it, news gets out. Not just news of who won, but often the fact that they got disqualified for drug abuse later. I'd be better off getting out the old VHS VCR and playing back recordings of the early nineties, when there was no test for EPO for the cyclists to fail.

So far this year I've been avoiding most TdF news, though people keep emailing me with "isn't it great about Mark Cavendish, now with X stage wins" .. so no surprises there - though I've been avoiding finding out which stages he has won, so they will still be a surprise. Incidentally, anyone in the UK with a copy of Silverlight installed on an XP VMware image (or indeed, a real machine) can watch the hour-long Paul-and-Phil ITV4 broadcasts. If US/Canada folk can get these, they get a commentary without the football commentator interrupting with ignorant comments whenever he can.

What I have also managed to do is catch the second Alpine stage live - Lautaret, Galibier, Telegraphe, Croix de Fer and Alpe d'Huez. I have done the first 3 of these passes, and it is good to see them again. What is more, the time zone differences were such that I got to watch this after the snorkeling round the coral reefs, while a storm related to the rain hitting the Indian west coast battered our windows:

Tour de France - Galibier and Alpe d'Huez

I caught the Galibier and Telegraphe work and the Croix de Fer approach, then popped out for dinner, coming back for the final 21 hairpins of the Alpe d'Huez. And, without spoiling the result for anyone - what an excellent stage! I hope the winner stays in yellow to the end of the tour, as they have earned it!

PS: doesn't the Alpe d'Huez resort look butt-ugly from above? It reminds me of Avoriaz or La Plagne, and they suck too. I know the ski resort encourages the bike ride for summer trade, but if they want winter visitors, shouldn't they pay the helicopters not to show what the town actually looks like? Far better off skiing La Meije, IMO.

Sanjiva WeerawaranaResponse to "Microsoft at OSCON"

Zack Urlocker has written a blog on his OpenSource blog at InfoWorld about Microsoft's participation at OSCON. Please read that first.

I started writing this as a comment on his blog, but it was getting a bit too long .. so I thought I'd blog it directly:

Zack, conventional wisdom seems to be that Microsoft must do open source by releasing the source code of one of their cash cows. If you're a shareholder of MSFT, does that make sense at all?? They're making $60B a year and we expect them to open source any of that? No way.

Take IBM. We all give IBM a lot of credit for being a "good" open source player, right?? Hmmmm. Really? Which product of theirs is open source? Compare with MSFT: Windows == mainframe (totally proprietary), Office == WebSphere family (totally proprietary), etc. etc.. Not a single major product of IBM's is open source. (I don't consider WebSphere Community Edition, aka Apache Geronimo, a serious play.) Should IBM open source DB2 or WebSphere or any of their other market-successful products?? Hell no. Why should they; certainly their shareholders aren't calling for it!

The way Microsoft can and should do open source is by (a) interoperating with open source stuff and enabling open source to run well on/with their products, and (b) by using open source to expand the markets they play in.

We (WSO2) are working with them closely on (a). For example, right now we have a joint booth with them at OSCON demonstrating WS-* interop between .Net, Java, PHP, Ruby, Perl and Spring. (Please do drop by and take a look!) In May we were part of a keynote speech by Bob Muglia (MSFT SVP Server & Tools) at TechEd ITPro where we showed an earlier version of that demo. That's the first time an open source company was part of a major MSFT keynote.

I'm not here to defend MSFT. Yes, they have DEFINITELY done all kinds of things to try to destroy the open source movement. IBM, on the other hand, has indeed helped open source in NUMEROUS ways (esp. in market/technology segments where they were not players .. they're VERY smart). However, I think the conventional wisdom that MSFT can do more with/for open source only by "showing me the code" is wrong.

That's where (b) comes into play. The way I see MSFT entering open source is by buying one or more open source companies and entering into market segments they do not play in now. When? Who knows. Who? Who knows. Obvious candidates go from Redhat to Novell to Spring to a bunch of others. Why should they enter spaces they are not in right now? Because the enterprise space is inherently and permanently heterogeneous and if you want to eat bigger and bigger chunks of that market, the only way to do that is to play in multiple segments of that market. You will not succeed by trying to get Java developers to convert to .Net. Nor PHP ones. Nor mainframe ones.

There certainly could be a (c): open sourcing one or more of their products. However, as anyone who has tried to open source a closed source product knows, it is REALLY difficult to make such an open source project succeed. First of all, getting legal clearance and scrubbing the code takes a long time (for example, someone told me that Sun decided to open source Solaris 5 years before they were finally able to do it .. no idea whether it's true). Second, open source code is naturally modularized and better documented, because there are geographically and temporally separated contributors from day 1 who communicate with each other through such module boundaries; closed source code typically lacks those boundaries. What that means is that it is VERY difficult to form a community around a complex piece of software, because no one can easily "carve out a corner for themselves". Even building such complex software is hard, and may require resources that a typical developer notebook can't deliver.

So even if the MSFT business were to decide that (c) made sense (and I really don't see why yet), the practical reality of getting the code out and making it work as a true community effort is going to be so hard that in the end they'd be holding a lemon of an open source project.

Thus, to me, the current MSFT strategy of doing (a) makes perfect sense. (b) will come when the time is right. Whoever will get bought out first will be making history.

July 23, 2008

Rajith Attapattu5 reasons why Distributed Systems are hard to program

Here are 5 reasons why I find distributed systems hard to program. This is not a thorough analysis, merely my observations from dealing with such systems. For completeness, here is the definition of "Distributed System" I used.
A distributed system consists of more than one process running as a single system. These processes can be on the same computer, or on multiple computers on a local area network, or geographically distributed over a wide area network.

Without any further ado, here are the reasons, in no particular order.

1. Difficulty in identifying and dealing with failures.
When communicating between processes, failures can happen at many levels, and dealing with them is not trivial. Of course you rely on frameworks based on technologies like RMI, CORBA, COM, SOAP, AMQP, REST (an architectural style, not a standard) etc. to handle these. But the fact remains that you still need to think clearly about these cases and handle them properly.

For example if we consider a simple interaction between two processes on different computers, the following failures can happen.

  • Failures that occur within the process that initiates the communication (sending the message or invoking the RPC call).
  • Failures between the time the process hands over the request to the OS and the OS writing it to the network.
  • Network failures while the packets are being transmitted from one computer to the other.
  • Failures between the time the OS on the receiving end receives the packets and the time it hands them over to the recipient process.
  • Failures that occur when the recipient process tries to process the request/message.

Sometimes the framework you use is unable to report, or simply does not report, all these error cases. Sometimes when the error is reported, it may not contain enough information to figure out at which level the error occurred.
Did it reach the remote computer? If so, how far up the stack did it go? If the receiving process got the request or message, did the error occur before or after the request/message was processed?
In some cases where idempotency is built into the receiving application or the framework/protocol (e.g. a message client that detects duplicate messages, or an HTTP GET), a simple retry may be OK. In other cases idempotency and retrying may be expensive or difficult to implement. There, careful thought needs to be given to how these different errors are identified and handled.
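To make the idempotency-and-retry point concrete, here is a minimal sketch (in Python, with hypothetical names; it is not tied to any of the frameworks mentioned above) of a receiver that detects duplicates by request ID, so a client facing an ambiguous failure can safely retry the same request:

```python
import uuid

class IdempotentReceiver:
    """Processes each request ID at most once, caching the result
    so that a blind retry replays the original answer."""
    def __init__(self):
        self._results = {}  # request_id -> cached result

    def handle(self, request_id, payload):
        if request_id in self._results:      # duplicate: replay cached result
            return self._results[request_id]
        result = payload.upper()             # stand-in for real processing
        self._results[request_id] = result
        return result

def call_with_retry(receiver, payload, attempts=3):
    """Client side: one ID per logical request, reused across retries,
    so a retry after an ambiguous failure cannot apply the work twice."""
    request_id = str(uuid.uuid4())
    last_error = None
    for _ in range(attempts):
        try:
            return receiver.handle(request_id, payload)
        except IOError as e:                 # ambiguous failure: retry same ID
            last_error = e
    raise last_error
```

The key design point is that the retry reuses the same request ID, so the receiver can tell "new request" apart from "retry of one I may already have processed".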

2. Achieving consistency in data across processes.
One of the hardest problems in programming distributed systems is achieving a consistent view of data across processes. When one process updates some data, you need to replicate the change to the other processes, so that if any other process decides to operate on the same set of data, it is doing so on the most current copy.
Let's look at two examples.

Assume a global banking application for ABC bank. A customer goes to a branch in New York, US and deposits money into an account. A few moments later his relative in London, UK makes a withdrawal from that account. Due to latency, there is obviously a time lag before the process in London sees the updated amount in the account.

In an online trading system, a user in NY places an item for sale. The transaction is recorded at the closest data center, which is in Boston. A few moments later another user in LA searches for the exact same item and is served from a data center in Phoenix. The user in LA may or may not see the item, due to the latency involved in replicating the data across data centers.

Example 1 requires strong consistency, while for example 2 you could get away with weak consistency, for example by setting an SLA that says data is valid within a 5 minute time window.
This is not an easy problem to solve, and the area is a subject in its own right. Werner Vogels wrote a nice piece on this called Eventually Consistent which is worth reading.
Of course there are specialized frameworks/libraries that can handle this for you. But there is still no escape: you pretty much need to understand the pros and cons of the various approaches, their failure modes, etc.
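A toy model (hypothetical Python, not any particular replication framework) makes the replication-lag window in the examples above visible: a write lands on one replica, and a reader at another replica sees stale data until propagation runs:

```python
class Replica:
    def __init__(self):
        self.data = {}

class EventuallyConsistentStore:
    """Writes go to one replica; a background step propagates them later."""
    def __init__(self, n=2):
        self.replicas = [Replica() for _ in range(n)]
        self.pending = []  # (key, value, source_idx) not yet propagated

    def write(self, replica_idx, key, value):
        self.replicas[replica_idx].data[key] = value
        self.pending.append((key, value, replica_idx))

    def read(self, replica_idx, key):
        return self.replicas[replica_idx].data.get(key)

    def propagate(self):
        """Simulates the replication-lag window closing."""
        for key, value, source in self.pending:
            for i, replica in enumerate(self.replicas):
                if i != source:
                    replica.data[key] = value
        self.pending.clear()
```

Strong consistency would mean propagate() completes (or the read is redirected to the written replica) before the write is acknowledged; the 5-minute SLA in example 2 is just a bound on how long the pending queue may sit unapplied.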

3. Heterogeneous nature of the components involved in the system.
A distributed system may contain components written in a variety of languages, deployed across machines with different architectures and operating systems. Needless to say, this poses certain challenges (especially integration and interoperability issues) when implementing the system. A whole range of standards/technologies has been put forward to solve these issues, including but not limited to CORBA, SOAP, AMQP, REST (an architectural style, not a standard) and RPC based frameworks like ICE, Thrift, Etch etc. Anyone who has worked with these technologies knows that none of them is trivial to use, nor provides a complete solution in every situation.

Anybody who has read the recent posts by Steve Vinoski, and the discussions around them, will recognize the issues/challenges surrounding RPC. The following paper discusses the impedance mismatch problems when working with IDL based systems. The issues with type systems and data formats are not limited to RPC. When using a message oriented approach like SOAP (doc/lit style) or AMQP, you will end up tunneling data that's not supported by the protocol as a string or a sequence of bytes. When using REST, you need to represent your resource in a format the requesting application understands/supports, which may be quite different from the native format.
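As a tiny illustration of the tunneling problem (plain Python and JSON here, standing in for the wire protocols above): a type the wire format doesn't support has to be flattened to a string, and the receiver needs out-of-band knowledge to restore it:

```python
import json
from decimal import Decimal

price = Decimal("19.99")

# JSON has no Decimal type, so the sender tunnels it as a string
# plus an ad-hoc type tag the receiver must know about.
wire = json.dumps({"price": str(price), "price_type": "decimal"})

received = json.loads(wire)
# The receiver must apply the same out-of-band convention to
# reconstruct the native type from the tunneled string.
restored = (Decimal(received["price"])
            if received.get("price_type") == "decimal"
            else received["price"])
```

Every party in a heterogeneous system has to agree on conventions like this, which is exactly where the impedance mismatches creep in.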

Again, this is not an easy issue to deal with, no matter what technology or framework is used. As an architect/developer you need to understand these issues and deal with them accordingly.

4. Testing a distributed system is quite difficult.
This is arguably one of the hardest aspects of developing a distributed system. Verifying the behavior and impact of your code in the system is not easy.
There are many aspects that need to be tested, and running all of them before every checkin is not practical. But it's a good idea to run them nightly, and to run some tests during the weekend. Here are some of the areas that need to be tested (I plan to write another blog entry elaborating on the testing aspects).

  • Functionality testing (can be covered with well written unit testing)
  • Integration testing - you need to test the distributed system as a whole with all the components involved
  • Interoperability testing - this is crucial when heterogeneous components (different languages, OS) are involved, and is quite different from integration testing
  • TCK compliance - If your system is based on standards/specifications, then you need to ensure that you haven’t broken anything w.r.t compliance
  • Performance testing - to ensure that your changes haven’t accidentally caused a degradation in performance
  • Stress testing - to ensure that your checkin hasn't accidentally caused any stability issues - e.g. an increased chance of deadlocks when the load increases
  • Soak testing - to ensure that your checkin hasn't caused any longevity issues - e.g. a memory leak that manifests only after a couple of hours or days

More often than not, developers cut corners in their testing, as running these tests is tedious and time consuming. But these tests need to be run regularly to catch issues in a timely manner, and the best way to tackle this is to automate as much testing as possible. There are many options, from continuous build systems like CruiseControl to a plain old cron job.
Functionality testing, TCK compliance, and certain types of integration and interoperability tests can be run periodically.
In most organizations, test machines are just lying around doing nothing during the night (unless around-the-clock testing is done with development centers in different time zones). Instead of wasting computing cycles, you could automate test suites to run during the night. More time consuming integration and interoperability tests, plus performance, stress and soak testing, can be done nightly, while longer-duration soak testing can be scheduled to run during the weekends.

While testing is a tough issue for any type of system, distributed systems have a lot more failure points, which adds to the complexity.
Getting these tests right to cover those failure points, and executing them, needs a lot of careful thought and planning.

5. The technologies involved in distributed systems are not easy to understand.
Distributed systems are not easy to understand. Neither are the myriad technologies used in developing them.
Most folks find it difficult to grasp the concepts behind these technologies. If you look at the discussions and misconceptions surrounding REST, you can see what I am trying to get at. CORBA was not an easy spec to understand, and neither is WS-* or AMQP. While it is true that you don't need to understand everything to develop with them, you still need at least a reasonable understanding to figure out how to tackle some of the issues mentioned above. Frameworks based on these technologies are touted as the cure for these problems. Sure, they can help, but they still do not shift the burden away from you.
To compound the issue, all sorts of vendors keep touting their technology/framework as the next silver bullet. No matter which vendor you use, at the end of the day you are still responsible for getting it right. And it is not an easy task. You need to face the reality that distributed systems are hard, and that you cannot hide every complexity behind some framework.

Eran ChinthakaWhat is eScience ?

(This will be helpful for me to explain to my friends what I am working on currently ;) )

Disclaimer: this will be a basic introduction, and might not be sophisticated enough for experts in the field.

In simple terms, eScience is where computer scientists blend with scientists from other fields to solve their problems efficiently. In my view there are two things being referred to as eScience these days.

1. Computer scientists apply their algorithms and knowledge to other science fields. For example, one could use algorithms and methods like neural nets, machine learning, etc., in the medical field to efficiently devise solutions in those areas.
Even though most people don't see this as part of eScience, having been to a talk by David Heckerman, I agree with him that it is.

2. There are algorithms that require large amounts of computational power and time to compute something, or that act on large amounts of data. For these algorithms to work, or for these terabytes of data to be mined, one might need the help of supercomputers.

Handling these large amounts of data, executing those algorithms on the data, and enabling scientists to work with the data (through GUIs, workflow engines, etc.) is also regarded as eScience.

This is what is emphasized by most people, and by Wikipedia as well.

I think I am also more into the second area, so I will explain it a bit more.

Think about the following scenario, related to meteorology, to understand the use case.

A country might have a large number of weather stations reporting various weather conditions to a central location. In the case of the US, IIRC, there are about 144 weather stations. Each weather station sends data, say, once an hour. If the size of the file sent by each weather station is about, say, 1GB (this value will depend on the resolution of measurements), then we get about 150GB per hour. There are algorithms to go through this data and mine it for interesting weather patterns. For example, one algorithm might find, say, a set of storms in the data. Since this first phase acts on each station's data separately, there has to be another algorithm to aggregate the results: if the first algorithm reports 5 storms, it could be that a few of them are really the same storm. Likewise, there are different algorithms that can be run on top of this data.
Scientists can either run their algorithms on this data alone, or they can define workflows to run on it. For example, they can design a workflow which will
  1. first mine these data, find interesting conditions
  2. cluster them to identify unique conditions
  3. talk to individual weather stations to get more data, if needed
  4. come up with a scenario explaining the current conditions
  5. predict the storm's path or behaviour
Since all of this has to be carried out in a timely manner (you don't want to get today's weather forecast tomorrow, right ;) ), and the data sets involved are large, it is necessary to use high performance computers for this.
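The mine-then-aggregate pattern in the storm example can be sketched like this (hypothetical Python with made-up thresholds, just to show the two phases: per-station mining, then merging detections that are really the same storm):

```python
# Hypothetical, simplified data: each station reports (lat, lon, pressure).
def mine_station(readings, threshold=980.0):
    """Phase 1: runs independently per station; flags low-pressure cells."""
    return [(lat, lon) for lat, lon, pressure in readings
            if pressure < threshold]

def aggregate(candidates, radius=1.0):
    """Phase 2: merge detections that are close enough to be one storm."""
    storms = []
    for lat, lon in candidates:
        for storm_lat, storm_lon in storms:
            if abs(storm_lat - lat) < radius and abs(storm_lon - lon) < radius:
                break  # close to an already-known storm: treat as the same one
        else:
            storms.append((lat, lon))  # genuinely new storm
    return storms
```

In a real eScience setting, phase 1 would be farmed out across a cluster (one task per station file) and phase 2 run as the reduce/aggregation step of the workflow.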

To perform the above mentioned tasks, there has to be some infrastructure which enables the users to
  1. design, execute and monitor workflows
  2. perform data movement from/to computing resources. This movement is not easy, as it involves not only large amounts of data but also working with supercomputers, data centers, etc.
  3. schedule and monitor jobs in high performance computing environments like clusters, grids, etc.
This whole environment can be regarded as an eScience environment. This is just one example, and there are lots of problems like this in bio-science, neuro-science, aerospace, etc.

July 22, 2008

Tom JordahlAIR Data Synchronization via LiveCycle Data Services ES 2.6

This is a nice article on the offline sync feature of LiveCycle Data Services 2.6 - AIR Data Synchronization via LiveCycle Data Services ES 2.6. It was written by John C. Bland II.

I worked on this feature for LCDS 2.6 (which, by the way, was just released last week). The SQLite DB built into AIR is really nice to work with, and the ActionScript APIs for using it were nicely designed by Jason, one of my coworkers here at Adobe.

I think the offline feature is pretty neat. There is still lots of room for improvement, but the basics that we do have are pretty powerful. We are in the planning stages for the next LCDS release and I expect that improving our offline story, particularly for AIR, will be on the list.

Check it out!

Deepal JayasingheFeel the taste of Rules with Axis2 – Rule Services

As I always say, the Axis2 architecture is so flexible that you can do almost anything with Axis2. We have a number of extensions for Axis2, such as

  • Database extension
  • JavaScript extension
  • JRuby extension
  • Jython extension
  • Shell script extension and so on

Not stopping there, we recently added one more extension to Axis2: deploying "rule services". We support most rule engines; you can configure your rule service to use the rule engine you want and deploy it in Axis2 or WSO2 WSAS. As an example, you can deploy Drule with this extension.

We have created a demo, hosted in my home directory, which has all the instructions you need to try the service. Try it and give us feedback so that we can improve it and build a complete rule service extension for Axis2.


Demo link : http://ww2.wso2.org/~deepal/drule/

July 21, 2008

Dan DiephouseNew Mule Performance Benchmark: Yup, we come out on top.

WSO2 has felt the need over the past few months to make many false claims about Mule’s performance. For instance:

Mule CE 2.0.1 couldn’t handle the cases where we used a concurrency level of 80; while other ESB’s scaled to support to over 2500 concurrent connections. This was after tuning the maximum active thread count to 100 from its default value, which limited Mule to a very few concurrent connections.

I ran their benchmarks. Sure enough, with their configuration, Mule performance was crappy. There were a couple of fatal flaws with their benchmark though:

  • It used the stock HTTP transport instead of the Jetty transport, which is NIO based. Switching fixed the concurrency issues.
  • It turns out there is a bug/feature in Linux pre-2.6.17 that requires you to turn on the tcpNoDelay switch in Mule. This affects performance on Linux based systems significantly for many of the tests - up to 200-300% differences were noted. In essence, this controls whether or not a TCP message is sent before the buffer is full. Because the number of concurrent users is low in a lot of the tests, the system is operating far under 100% load. This means it takes longer for a buffer to fill up, and hence longer for the message to send.
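For reference, Mule's tcpNoDelay flag corresponds to the standard TCP_NODELAY socket option, which disables Nagle's algorithm so small writes go out immediately. At the socket level (shown here in Python rather than Mule's Java configuration) it is just:

```python
import socket

sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
# Disable Nagle's algorithm: send small messages immediately instead of
# waiting for the buffer to fill (or an ACK to arrive) first.
sock.setsockopt(socket.IPPROTO_TCP, socket.TCP_NODELAY, 1)
enabled = sock.getsockopt(socket.IPPROTO_TCP, socket.TCP_NODELAY)
sock.close()
```

That is why the effect shows up mainly at light loads: with full buffers the messages go out promptly either way.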

Results

We released a paper with pretty graphs. Here are the relevant conclusions:

With a proper configuration Mule was able to process many more transactions per second than WSO2’s ESB in all three of their scenarios at almost every load level. Mule was on average 28% faster for [proxying HTTP endpoints], 77% faster for [XPath based] content based routing, and 286% faster for [XSLT] transformations. The only tests where Mule did not exceed WSO2 were with small XML messages and very light loads. Here the difference was less than 2% and is not statistically significant.

My hunch is that we can also beat the proprietary ESB in many scenarios as well if the system is properly tuned.

Content Based Routing

While I was looking into this, I decided we might as well not just beat them, but significantly widen the lead. You may remember a while back I announced SXC, an XML parser compiler. It has a streaming XPath engine. Mule now supports it, and it outperforms anything else out there by a wide margin. For small messages (0.5K) we were up to 25% faster. For medium sized messages (5K) we were 200-300% faster under large loads. Take out all the HTTP overhead and I think we can safely assume that SXC is about 10x faster than anything else.

On the other hand we have AXIOM + Jaxen. Jaxen is fundamentally DOM based. Even though AXIOM is a "streaming DOM", Jaxen is very often going to trigger a full load of the document into memory. Not to mention that SXC actually compiles the whole XPath expression down to a series of Java functions/statements in the most optimized form possible.
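The DOM-versus-streaming distinction isn't specific to Jaxen and SXC. As a rough analogy (Python's standard ElementTree here, not the Java libraries being benchmarked), compare a full parse, where the whole document is in memory before evaluation starts, with an incremental parse that can stop at the first match:

```python
import io
import xml.etree.ElementTree as ET

XML = b"<orders><order id='1'/><order id='2'/><order id='3'/></orders>"

# DOM style: the whole document is materialized before evaluation begins.
root = ET.parse(io.BytesIO(XML)).getroot()
dom_first = root.find("order").get("id")

# Streaming style: parse events arrive incrementally, so we can stop
# at the first match without building the rest of the tree.
stream_first = None
for event, elem in ET.iterparse(io.BytesIO(XML), events=("start",)):
    if elem.tag == "order":
        stream_first = elem.get("id")
        break
```

For content based routing, where the routing decision usually depends on something near the top of the message, the streaming approach wins exactly because it never pays for the rest of the document.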

(Surely someone will object and say that SXC doesn’t support all of XPath. Yes, that is true. However, in that case you can just use the Jaxen routing filter and then performance is equal. But rarely do you route on such complicated expressions. If SXC is missing something, file a JIRA and I’ll try to add it.)

In addition to all this goodness, I get the added satisfaction of knowing that to equal our performance WSO2 will have to adopt code that I’ve written (SXC) or write something like it from scratch, which I would consider quite funny.

Deepal JayasingheBeauty and power of JavaScript – WSO2 Mashup server

The WSO2 Mashup Server is a Web services application server tuned to deploy JavaScript as Web services. It is built on WSO2 WSAS (which is in fact built on Apache Axis2). In addition to WSAS, the Mashup Server also uses the WSO2 Registry.

With the WSO2 Mashup Server you can deploy JS as Web services, as well as invoke any service from a JS client. In addition, it has a number of other cool features as well.

The WSO2 Mashup team did their 1.5 release recently; you can try it out. It is totally free and released under the Apache License.



==================================================================

The WSO2 Mashup Server is a powerful yet simple and quick way to tailor Web-based information to the personal needs of individuals and organizations. It has been
released under the Apache Software License 2.0.

This release can be downloaded from http://wso2.org/projects/mashup

WSO2 Mashup Server 1.5 - Release Note - 21st July 2008
======================================================================
"Create, deploy, and consume Web services Mashups in the simplest fashion"

The WSO2 Mashup Server is a powerful yet simple and quick way to tailor
Web-based information to the personal needs of individuals and organizations.
It is a platform for acquiring data from a variety of sources including
Web Services, HTML pages, feeds and data sources, and process and combine it
with other data using JavaScript with E4X XML extensions. The result is then
exposed as a new Web service with rich metadata and artifacts to simplify the
creation of rich user interfaces.

The WSO2 Mashup Server will form the backbone of an ecosystem of
community-developed services that will broaden the palette of capabilities
for mashups and distributed applications.

WSO2 Mashup Server is released under the Apache License v2.0

Check out the project home page at http://www.wso2.org/projects/mashup for
additional information.

--------------------------
Features List
==========================
* Hosting of mashup services written using JavaScript with E4X XML
extension
- Simple file based deployment model
* JavaScript annotations to configure the deployed services
* Auto generation of metadata and runtime resources for the deployed
mashups
- JavaScript stubs that simplify client access to the mashup service
- TryIt functionality to invoke the mashup service through a web
browser
- WSDL 1.1/WSDL 2.0/XSD documents to describe the mashup service
- API documentation
* Ability to bundle a custom user interface for the mashups
* Many useful Javascript Host objects that can be used when writing mashups
- WSRequest: invoke Web services from mashup services
- File: File storage/manipulation functionality
- System: Set of system specific utility functions
- Session: Ability to share objects across different service
invocations
- Scraper: Extract data from HTML pages and present in XML format
- APPClient: Atom Publishing Protocol client to retrieve/publish Atom
feeds with APP servers
- Feed: A generic set of host objects to transparently read and
create Atom
and RSS feeds
- Request: Ability get information regarding a request received
* Support for recurring and longer-running tasks
* Support for service lifecycles
* Ability to secure hosted mashups using a set of commonly used security
scenarios
* Management console to easily manage the mashups
* Simple sharing of deployed mashups with other WSO2 Mashup Servers
* Mashup sharing community portal (http://mooshup.com) to share and host
your
mashups


--------------------------
New In This Release
==========================
* Request object
* Ability to secure hosted mashups using a set of commonly used security scenarios
* Ability to call secured services using the WSRequest host object
* Integrated Data Services support (expose data locked up in databases, Excel spreadsheets and CSV files with ease)
* OpenID login support
* Apache Shindig powered, Google-compatible, per-user dashboard and browser-based editor support for developing gadgets for hosted mashups (http://wso2.org/library/3813).

-------------------------
Known Issues
=========================
* The Management Console was tested only on IE 6/7 & Firefox 1.5/2.0/3.0.
* Inter-service dependencies using the dynamically generated stubs may give deployment-time errors. A workaround is to save a local copy of the stub into the dependent service.
* JSON support lacks try-it support.
* The mashup editor will convert < and > characters to their escaped equivalents while saving the code in the server. This might result in malformed XML, so use these special characters with caution. Refer to http://wso2.org/jira/browse/MASHUP-607.
* Built-in samples cannot be secured - the built-in "sample" user does not have a keystore associated with it (system services use the keystore of the primary account).

---------------------------------
Future Directions
=================================
* Improved tooling support.
* An expanded toolkit of generic building-block services.
* Deep registry integration including governance, rollback, dependency analysis, etc.
* Lots more cool stuff.

------------------------
Reporting Problems
========================

Issues can be reported using the public JIRA available at
https://wso2.org/jira/browse/MASHUP


------------------------
Contact us
========================

WSO2 Mashup Server developers can be contacted via mailing lists:
For Users: mashup-user@wso2.org
For Developers: mashup-dev@wso2.org
For details on subscriptions: http://www.wso2.org/projects/mashup#mail

Questions can also be raised in this forum: http://www.wso2.org/forum/226

Keith ChapmanWSO2 Mashup Server 1.5 released

It's been a couple of busy weeks, and the effort was worth it. We've been working on the Mashup Server 1.5 release, which has a bunch of new features.

The following are some of the new features in this release:
  1. Integrated Data Services support - Users can now create data services using the Mashup Server itself. Data services make it trivial to expose data locked up in databases, CSV files or Excel spreadsheets.
  2. Ability to secure mashups - Users can now secure mashups running on the Mashup Server at the click of a button. The Mashup Server ships with the 15 most commonly used security scenarios, which are based on WS-Security. This allows users to control access to their mashups.
  3. Ability to call secured services with ease - Calling secured services has never been this easy. Users can call secured services from the Mashup Server by writing just a couple of lines of JavaScript code.
  4. Support for gadgets - Any mashup running on the Mashup Server can be exposed as a Google gadget, which can be hosted on the Mashup Server itself or on iGoogle.
  5. Personalized dashboard - The Mashup Server can also act as a personalized dashboard. More details on that can be found here.
  6. OpenID login support - In the previous release we introduced InfoCard-based login support, and in this release we've gone even further and added OpenID-based login support, powered by the WSO2 Identity Solution.
  7. Service lifecycle support - Service lifecycles help manage the life cycle of a particular mashup you deploy in the WSO2 Mashup Server.
These are just a few of the features we've added in this release. For full details, please refer to the release notes. Stay tuned for more details and usages of these features. We'll be upgrading mooshup to use this new release in the coming days. Until then, if you want to try the WSO2 Mashup Server, feel free to download it, because it's freely available under the Apache License.

Sanjiva WeerawaranaMicrosoft didn't invent SOAP!

Wow! Here I've been thinking for nearly the last 10 years that Microsoft invented SOAP. Duh.

Not only that, SOAP, it turns out, was not invented in 1999. It was actually first invented in 1953. (I'm of course not talking about soap, in which case the invention date is a few years prior to that ;-).)

I was searching for lists of programming languages to give to my programming languages class students to do papers on and I found that the IBM 650 assembly language was called Symbolic Optimal Assembly Program ... or SOAP!

Deepal JayasingheWSO2 Web Services Framework for Perl 1.1 Released

WSO2 Web Services Framework for Perl (WSO2 WSF/Perl) is an open source,
enterprise-grade Perl extension for providing and consuming Web
services in Perl. WSO2 WSF/Perl is a complete solution for consuming
Web services, and is the Perl extension with the widest range of
WS-* specification implementations. Its key features include clients
with WS-Security support and binary attachments with MTOM.

You can download the release from:
http://wso2.org/downloads/wsf/perl

Project home page:
http://wso2.org/projects/wsf/perl


------------
Key Features
============

1. Client API to consume Web services
* WSMessage class to handle message level options
* WSClient class with both one-way and two-way service invocation support

2. Attachments with MTOM
* Binary optimized
* Non-optimized (Base64 binary)

3. WS-Addressing
* Version 1.0
* Submission

4. WS-Security
* UsernameToken and Timestamp
* Encryption
* Signing
* WS-SecurityPolicy based configuration

5. WS-Reliable Messaging
* Single channel two way reliable messaging

6. REST Support
* Expose a single service script both as SOAP and REST service


-------------------
Reporting Problems
===================
Issues can be reported using the public JIRA available at:
https://wso2.org/jira/browse/WSFPERL

Apache SynapseSynapse artifacts are OSGi compliant

Synapse artifact jar files are now OSGi compliant...!!

This means that you can now use the Synapse artifact jar files within an OSGi container; the Synapse standalone server, however, is not yet an OSGi container itself.

You can read more about this in this blog post.
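For the curious, "OSGi compliant" here mostly means the jars now ship OSGi metadata in their manifests, so a container can resolve and wire them as bundles. An illustrative MANIFEST.MF fragment (the names and versions below are made up for illustration, not copied from an actual Synapse jar):

```
Bundle-ManifestVersion: 2
Bundle-SymbolicName: org.apache.synapse.core
Bundle-Version: 1.2.0
Export-Package: org.apache.synapse;version="1.2.0"
Import-Package: org.apache.axis2.engine
```

The Export-Package/Import-Package headers are what let the container enforce versioned dependencies between the artifact jars.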

July 20, 2008

Afkham AzeezThe first ever product release from WSO2!



A picture with a lot of historical value... This picture was taken soon after the very first product release from WSO2: WSO2 Tungsten 1.0-alpha (now known as WSO2 WSAS). This was the very first team that worked on WSAS. Only two members from this initial team still remain at WSO2; the rest have left to pursue higher studies. This post is a tribute to all those former team members.

Nandana MihindukulasooriyaInternet & Privacy : Can a plain old web site decide whether I am a male or a female ?

Do we have any privacy on the Internet? For example, when I am searching for something on Google or reading emails in my Gmail account, obviously Google reads them too. Otherwise how could Google put advertisements most relevant to the content of my emails on the right-hand side under sponsored links? So at the end of the day, Google knows what my interests are, who I am dealing with, what my greatest fears are and almost everything about me. Let's hope that Google will always honor its motto, "Don't be evil". Anyway, let's forget Google for a moment.
When I was on DZone, I came across an interesting post about a widget which tries to determine whether you are male or female according to your browsing history. I know what you will say: "What? You gave it your browser history?". No, I didn't; a simple piece of JavaScript just stole that information from me. So how does it work? The script is called Social History. The idea behind the Social History script is pretty simple. It has a set of links to social sites such as Del.icio.us, Digg, Facebook, Reddit, Technorati, Slashdot, etc., and it decides whether I have visited those links based on their CSS style (the color of the link). Aza Raskin is using this to show bookmark links in an optimal way: rather than having a static set of bookmark links like I have below under each blog post, he suggests presenting bookmark links only for the sites the reader actually uses, which can be found using his script. Pretty neat idea :).
Now, Mike Nolet has gone one step further and developed a widget that tries to determine your gender based on your browser history, using a simple algorithm with a modified version of the Social History script. It uses the US top 10K sites for this. You can try out the widget here. For me, it correctly determined that I am male with a probability of 97%, even though it has not worked for some people - so you'd better try it and find out.
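The detection trick is CSS-based history sniffing (checking the rendered color of links to popular sites), and the gender guess is then computed from per-site audience ratios. A toy sketch of how such a ratio-combining heuristic could work - the sites and ratios below are made-up illustrations, not Nolet's actual data or algorithm:

```java
import java.util.LinkedHashMap;
import java.util.Map;

/** Toy sketch of a gender-guessing heuristic over visited sites. */
public class GenderGuess {
    // Hypothetical fraction of male visitors per site (illustrative numbers).
    static final Map<String, Double> MALE_RATIO = new LinkedHashMap<>();
    static {
        MALE_RATIO.put("slashdot.org", 0.87);
        MALE_RATIO.put("digg.com", 0.80);
        MALE_RATIO.put("facebook.com", 0.45);
    }

    /** Combines per-site ratios with a naive-Bayes-style product and normalizes. */
    public static double maleProbability(Iterable<String> visited) {
        double male = 1.0, female = 1.0;
        for (String site : visited) {
            Double r = MALE_RATIO.get(site);
            if (r == null) continue; // site not in the reference list
            male *= r;
            female *= (1.0 - r);
        }
        return male / (male + female);
    }

    public static void main(String[] args) {
        double p = maleProbability(java.util.List.of("slashdot.org", "digg.com"));
        System.out.printf("P(male) = %.2f%n", p);
    }
}
```

Because the product lets a handful of strongly skewed sites dominate, such a widget can be confidently wrong for atypical browsing histories, which may explain why it fails for some people.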

You may also find the following video about today's Internet interesting.

July 19, 2008

Ajith RanabahuMy first car in US !

After about two years of living in the US, I got my first car here (I've had cars before, but this is my first in the US). It's a Honda Accord, and the pictures will tell the story :) A bunch of thanks goes to Meena and Karthik for helping me out in my car hunt.





Eran ChinthakaPlaces to visit in Washington State - Mt St Helens

Location : Johnston Ridge Observatory, At the end of Spirit Lake Memorial Highway, WA

Directions : Google Maps, About 3 hrs from Bellevue, WA

For GPS : 46.276258,-122.216721

Link :
en.wikipedia.org/wiki/Mount_St._Helens

This was one of the most interesting trips I have been on, with my family. The road to Mt St Helens was full of fascinating scenery.
After we exited from I-5, the road goes through a small town, and after that it is full of sharp turns. At one point there was a sign saying it was the last place to get gas. It was 37 miles from that point, but I didn't realize we would be gaining elevation and my car would have to do extra work. (Thanks to the Corolla's fuel efficiency I didn't run out of gas :) )
There are a couple of viewpoints on the way, and most of them were gorgeous. There was one place where you can see the path of the mud and lava flow.
When we got to the Johnston Ridge Visitor Center, the view was great. Since it was a sunny day we could see the whole mountain without any trouble. There are a couple of trails led by rangers, and one of them goes toward Spirit Lake. The visitor center also had some movies playing inside a theater.

This is a combination of three photos, showing the mighty Mt St Helens and the living crater.


Mt St Helens and the lava and mud flow path



360° view around the Mt St Helens area. If you look at the surrounding mountains, you can still see some burnt trees.


Charitha KankanamgeHow to use Axis2 codegen ANT task

The Apache Axis2 code generator tool provides a very useful custom ANT task. All of the command line code generation options are available with the ANT task as well.
Let's see how simple client-side code generation is done using the ANT task.

Pre-requisites:
Install Apache Axis2 1.3 or higher
Install Apache ANT 1.7 or higher

Step 1

Create a directory and create a build.xml file inside it as given below (e.g. C:\temp\build.xml).

<project name="CodegenExample" default="codegen" basedir=".">

    <path id="axis2.classpath">
        <fileset dir="C:\axis2\axis2-1.4\lib">
            <include name="**/*.jar" />
        </fileset>
    </path>

    <target name="codegen">

        <taskdef name="axis2-wsdl2java"
                 classname="org.apache.axis2.tool.ant.AntCodegenTask"
                 classpathref="axis2.classpath"/>

        <axis2-wsdl2java
            wsdlfilename="C:\test\your.wsdl"
            output="C:\output" />
    </target>

</project>

If you are familiar with ANT, you should be able to understand this simple build script easily. We use the path referenced as "axis2.classpath" to add the Axis2 library jars located at AXIS2_HOME/lib (in our example, C:\axis2\axis2-1.4\lib).

The Axis2 codegen ANT task is implemented by the org.apache.axis2.tool.ant.AntCodegenTask class, so we refer to that class inside a taskdef, as in taskdef name="axis2-wsdl2java".

The wsdlfilename attribute is equivalent to the -uri option of the wsdl2java command line tool, and output is similar to the -o option.

Replace the value of wsdlfilename according to your WSDL location.

Step 2
Open a command prompt and go to the directory where you saved the above build.xml.
Type 'ant'.

The generated stub classes will be saved in the specified output directory.

July 18, 2008

Sanjiva WeerawaranaConnectivity technology confluence: GSM, 3G, Wifi, IP

Some time ago I bought a BlackBerry Curve 8320 from T-Mobile in the US. I needed to have a US phone number, but at the same time I hate to pay the $3-4/minute roaming rates that T-Mobile (and everyone else) charges when I'm out of the US. In addition to the Wifi support, this particular model has a feature called UMA - Unlicensed Mobile Access - which basically allows the cellular call to be routed via the Wifi connection over the Internet. That means I can have a US number at home and in my office in Sri Lanka and pay nothing extra for the call. (In fact the call is actually free - it's part of a flat-rate service you buy from T-Mobile.)

Anyway, right now I'm in a location where there's no 3G. However, my wife has a 3G connection from Mobitel using a Huawei E220 HSDPA USB modem connected to her laptop. I also have a pocket router (a D-Link DWL-G730AP) that I always carry around with me. She also runs Ubuntu on her machine, so I set her machine up to do IP forwarding between the 3G connection (which is a USB device) and the wired ethernet connection to the router. So we have our own little wifi hotspot: my laptop (from which I'm writing this blog) is connected via the wifi router, through her machine, via 3G to the Internet.
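On Linux, a setup like this typically comes down to enabling forwarding and NATing the wired interface out through the 3G link. A rough sketch - the interface names ppp0 (3G modem) and eth0 (wired port) are assumptions; substitute your own:

```shell
# Turn on IPv4 forwarding for this session (persist it via /etc/sysctl.conf)
sudo sysctl -w net.ipv4.ip_forward=1

# Masquerade (NAT) traffic leaving via the 3G interface
sudo iptables -t nat -A POSTROUTING -o ppp0 -j MASQUERADE

# Allow forwarding between the wired LAN and the 3G link
sudo iptables -A FORWARD -i eth0 -o ppp0 -j ACCEPT
sudo iptables -A FORWARD -i ppp0 -o eth0 -m state --state RELATED,ESTABLISHED -j ACCEPT
```

The MASQUERADE target is what gives you the second layer of NAT the packets have to survive on their way to the phone.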

Ok, that bit is easy. The cool thing is that my cell phone is now also connected through that. So right now, if someone calls my US cell phone number, the call goes through that person's network (cellular or otherwise) to T-Mobile, then over the Internet via Mobitel's IP network via 3G to my wife's laptop, then over wired ethernet to my router, and then wirelessly to my cell phone. In the process the packets will have survived two levels of NATing (once by my router, once by the laptop).

Not bad, eh?

Sam RubyLife after Bug Tracking Systems

Avery Pennarun: The git developers don’t track bugs. If you find a bug, you can write about it on the mailing list. You might get flamed. And then probably someone will ask you to fix it yourself and send in a patch.  This is unlike almost all other open source projects.

Sometimes ideas take time to percolate.  When I first saw Avery’s post, it didn’t quite sink in.

When I started playing with hg, I noticed that I was applying a different style of development than I previously had done.  One that I felt more comfortable with.  And I thought again about Avery’s post.

And when I came across Chad Wooley’s comment: But using your SCM as a messaging platform?  Come on, that’s taking the social networking thing too far...  I pray that I never see the official Twitter channel for an open source project I care about, because I ain’t going there...; I once again thought about Avery’s post.

It now occurs to me that not all projects need bug tracking systems.  In fact, for some projects, not having a bug tracking system may very well be a feature.  In particular, if the bug tracking system on your project is the place where feedback goes to die, you might be better served not having one.  But if you do decide to go this way, you would be well served to consider one of the various DVCS systems out there, like bzr, hg, and git.

Nandana MihindukulasooriyaOpenID, Phishing & PAPE, Are we there yet ?

When I got to know how OpenID works, I was really impressed with the idea. It is pretty simple and straightforward compared to the WS-Sec* stuff that I am messing with. And it seems OpenID is becoming the hype, and sometimes we tend to think of it as a silver bullet (at least I did). OpenID is cool as an SSO solution. But what about phishing - does it prevent phishing? The answer is no. In fact, it seems to make a phisherman's life easy by providing him with a new way of driving the fish into the nets. Why? Because of the way OpenID works, the phishing site gets to redirect the user to an impersonated OpenID provider without arousing much suspicion, and it can find enough information about your OpenID provider to automate this process.
If you want to see how this really happens, you can try out the OpenID phishing demo. And if you don't want to try it out, Mike Jones has illustrated how the OpenID phishing demo works in his blog post "Gone Phishing". Stefan Brands also summarizes the security issues of OpenID in his post "The problem(s) with OpenID". Further, Ben Laurie describes this problem in more detail in his post "OpenID: Phishing Heaven". In response, Simon Willison suggests how OpenID providers can help reduce the risk of phishing. The idea is to make users go directly to the OpenID provider without redirecting them or making them follow links. According to Simon, "Instead of displaying the login form directly, providers should show a page that looks something like this: To log in, please navigate to login.example.com. The page you are currently viewing should contain no links; if there are any links or this text is changed in any way you may become a victim of online identity theft." He also suggests that OpenID provider URLs should be short, distinctive and memorable to make this effective. Yes, most people agree that the best solution to prevent phishing is to educate the users, but then again, is this really possible? Will an ordinary person remember this when he is forwarded to an impersonated web site which directly offers a login screen or a link? Will someone who doesn't care a thing about what is in the address bar notice that it is not http://myopenid.com?
One way of doing this is OpenID providers forcing users to use a bookmark to log in to the OpenID provider. MyOpenID's SafeSignIn is one such solution. But if someone impersonating the OpenID provider puts up a nice message saying that, as a new feature, you can now log in directly without using the bookmark, how many people will fall for that? Another solution is to use pre-configured images or icons, so that only the real provider can present you with the image/icon you chose; if you don't see the image/icon, you can notice that you have landed on a spoofed site. Yahoo's Sign-in Seal and MyOpenID's Personal Icon are two such solutions. But again, this depends on how aware the user is of these features. VeriSign's OpenID SeatBelt plugin is another approach taken to prevent phishing. This plugin has an "Enable Phish Detection" option; when it is enabled, it tries to detect phishing attempts when we are redirected to OpenID providers, and it always redirects us to the legitimate OpenID provider. Another solution is to use OpenID with Infocards. Kim Cameron talks about how to prevent phishing attacks with Infocard in detail in his blog post "Integrating OpenID and Infocard". There seem to be many other custom efforts to avoid phishing attacks, but OpenID seems to be moving to a standard solution.
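The sign-in seal idea is easy to sketch: the provider remembers an image each user picked at enrollment and shows it after the username step, before any password is typed; a spoofed site cannot know which image to show. A minimal conceptual sketch - the names and data here are hypothetical, not any provider's actual implementation:

```java
import java.util.Map;

/** Conceptual sketch of a "sign-in seal" anti-phishing check. */
public class SignInSealSketch {
    // Seal image chosen by each user at enrollment (illustrative data).
    static final Map<String, String> SEALS = Map.of(
        "alice", "blue-otter.png",
        "bob", "red-kite.png");

    /** Returns the seal to display for a username, or null if unknown. */
    public static String sealFor(String username) {
        return SEALS.get(username);
    }

    public static void main(String[] args) {
        // Flow: 1) user types only the username; 2) page shows the seal;
        // 3) user enters the password only after recognising the seal.
        System.out.println("Seal for alice: " + sealFor("alice"));
    }
}
```

As the post notes, the protection still depends on the user actually noticing when the seal is missing.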
The OpenID Provider Authentication Policy Extension (PAPE) specification tries to solve this problem by enabling OpenID relying parties to request that a phishing-resistant authentication method be used by the OpenID provider, and by enabling providers to inform relying parties whether a phishing-resistant authentication method was used. So if the OpenID provider doesn't authenticate the user in a phishing-resistant way, it should let the relying party know that it didn't use phishing-resistant authentication, so the relying party can decide what to do. But is this completely bullet-proof? This only guarantees that the OpenID provider used phishing-resistant authentication this time, because the relying party asked for it; it doesn't necessarily mean that it always used phishing-resistant authentication. What if some phisherman impersonated a relying party, and the user has already become a victim of a phishing attack? Then, when the legitimate relying party asks the OpenID provider to do the authentication in a phishing-resistant manner, the phisherman can still succeed, as he has already got the necessary information.
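Concretely, a relying party asks for this by adding a few extra parameters to the OpenID authentication request. The sketch below builds that parameter set; the namespace and policy URIs follow my reading of the PAPE draft and should be checked against the current spec before use:

```java
import java.util.LinkedHashMap;
import java.util.Map;

/** Sketch of the extra request parameters a relying party might add
 *  to ask for phishing-resistant authentication (URIs per the PAPE
 *  draft as I recall them - verify against the spec). */
public class PapeRequestSketch {
    static final String PAPE_NS = "http://specs.openid.net/extensions/pape/1.0";
    static final String PHISHING_RESISTANT =
        "http://schemas.openid.net/pape/policies/2007/06/phishing-resistant";

    public static Map<String, String> papeParams() {
        Map<String, String> p = new LinkedHashMap<>();
        p.put("openid.ns.pape", PAPE_NS);
        p.put("openid.pape.preferred_auth_policies", PHISHING_RESISTANT);
        p.put("openid.pape.max_auth_age", "300"); // demand a fresh authentication
        return p;
    }

    public static void main(String[] args) {
        papeParams().forEach((k, v) -> System.out.println(k + "=" + v));
    }
}
```

The provider's response then echoes back which policies it actually satisfied, and the relying party decides what to do if phishing-resistant authentication was not used.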
So it seems that protecting an average user from phishing using technology alone (without educating him about security concerns) is a pretty hard thing. And yeah, we have to agree it is an inherent problem, not a problem of OpenID itself. So will we be able to get rid of phishing without the user's support, just using technology? Maybe we will, someday. Who knows....

July 17, 2008

Deepal JayasingheScripting support with Axis2

As we all know, Apache Axis2 is a Java-based Web service framework. In addition, Axis2 is becoming the de facto Java-based Web service framework, which is obvious when we look at the number of daily downloads as well as the number of companies who use Axis2. It took about four years to come to this position, with great support from the community.

Now we can find a number of scripting languages which run on the JVM. If a scripting language runs on the JVM, then we can easily write a scripting extension for Axis2. At the moment Axis2 has scripting extensions for several of these languages.

A language extension means that one can deploy scripting services in Axis2, and one can use the scripting language to invoke or access a service deployed anywhere.

With the Axis2 architecture, we can easily plug in a new language extension. It is just a matter of writing a few components and registering them in Axis2.

  • Deployer - processes the scripting file and creates a Web service from it
  • Schema generator - generates a schema from the scripting class; for example, if we are deploying a JS file, it generates the corresponding WSDL
  • Message receiver - when a message is received for that particular service, it first comes to the message receiver, which invokes the scripting class and sends the response, if any
Registering an extension in Axis2 is just a matter of adding your custom deployer to axis2.xml.
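To make the deployer's role concrete, here is a deliberately simplified sketch of what a scripting-language deployer does. Note that this uses a local stand-in interface for illustration, not the real org.apache.axis2.deployment.Deployer SPI:

```java
import java.util.HashMap;
import java.util.Map;

/** Simplified stand-in for a scripting-language deployer (not the real Axis2 SPI). */
public class ScriptDeployerSketch {
    /** Stand-in for the deployer contract: react to a deployed artifact. */
    interface Deployer {
        void deploy(String fileName);
    }

    /** Maps a service name to the script file backing it. */
    static final Map<String, String> REGISTRY = new HashMap<>();

    /** Registers any *.js file as a service named after the file. */
    static final Deployer JS_DEPLOYER = fileName -> {
        if (fileName.endsWith(".js")) {
            String serviceName = fileName.substring(0, fileName.length() - 3);
            REGISTRY.put(serviceName, fileName);
        }
    };

    public static void main(String[] args) {
        JS_DEPLOYER.deploy("echo.js");
        System.out.println(REGISTRY); // {echo=echo.js}
    }
}
```

In the real framework, the schema generator and message receiver then take over: the former derives a WSDL from the registered script, and the latter dispatches incoming messages to it.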

Afkham AzeezYou can check-out any time you like, But you can never leave!

Today there was a farewell party for Deepal, Ruchith, Saminda, Sanka, Sandakith, Dinesh, Diluka, Suran & Chandima. Most of them are leaving for grad school to pursue masters degrees & doctoral studies. It was a sad day since these are some of the people who helped shape WSO2 from a technical perspective as well as its culture. They were also some of my close friends, some of whom I've known throughout my career. Sanjiva, in his farewell address mentioned that WSO2 is like Hotel California; You can check-out any time you like, But you can never leave! Indeed, once a WSO2er, always a WSO2er. WSO2 is such a unique company not only taking into consideration the technological aspects, but also the environment & culture. Anybody who had worked at WSO2 would agree that it was a unique experience & opportunity. I'd like to wish all of these guys the best of luck & wish that they come back to work at this great place after completing their studies.

July 16, 2008

Steve LoughranPorts in Use

Ruwan Linton covers the process used to get an IANA-assigned port for Apache Synapse.

I don't think I've ever bothered to talk to IANA for a port for any of my applications.

  1. It's a hint. If someone is using your port - even if it's IANA-assigned - you have to be the one to deal with it. So your port must always be a configurable option.
  2. A lot of ops teams like to change ports around, to add a bit of obscurity to the game. There's no need to run SSH on port 22, for example, and doing so on remotely visible machines just increases your server load as machines around the world try to guess the passwords for likely accounts.
  3. The real ports you have to avoid are those used by trojans, by Storm (port 7871, BTW) and other worms, because security scanners will throw a wobbly when any scanned machine has those ports open; it's taken as a sign of being 0wned. And of course, the malware authors never bother to go through IANA to pick a port.
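The first point - that the port must always be configurable - is cheap to honor. A minimal Java sketch (the synapse.port property name and the 8280 default are purely illustrative):

```java
/** Minimal sketch: resolve a listen port from configuration, with a fallback. */
public class PortConfig {
    /** Parses a configured port string, falling back to a default when unset. */
    public static int resolvePort(String propertyValue, int defaultPort) {
        if (propertyValue == null || propertyValue.trim().isEmpty()) {
            return defaultPort;
        }
        int port = Integer.parseInt(propertyValue.trim());
        if (port < 1 || port > 65535) {
            throw new IllegalArgumentException("port out of range: " + port);
        }
        return port;
    }

    public static void main(String[] args) {
        // e.g. -Dsynapse.port=9090 on the command line overrides the default
        int port = resolvePort(System.getProperty("synapse.port"), 8280);
        System.out.println("listening on " + port);
    }
}
```

With this in place, an ops team can move the service off its default port without touching the code, which is exactly what points 1 and 2 call for.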

One thing that flexible applications can do is not require a specific port at all. This is what the portmapper daemon enables. It's a shame that this tool/binding protocol has fallen into effective disuse.

July 15, 2008

Sam RubyStalled Tickets

Joseph Scott: we can definitely use more people looking at the XML-RPC and AtomPub code.

My experience matches Jeff's, namely that post-2.3, contributions of time - in terms of showing up on the IRC channel, producing and commenting on both bug and feature requests, and producing actual patches - rarely produce the desired result. As an example, this ticket was explicitly opened based on a request from Joseph in order to obtain feedback, and to date it has received none.

That's fine - I for one certainly have plenty of other places to focus my attention - but if the WP team wants more people looking at areas such as the XHTML, Atom and/or AtomPub code, IMHO there needs to be a person with commit access to the codebase who is actively engaged in facilitating these efforts.

Sam RubyTracking Towards Decimal Support in Firefox

Bug 445178 (decimal) – Implement Decimal Support

Thanks John!

Update: Downloadable standalone SpiderMonkey executables for Darwin, Linux, and Windows.

July 14, 2008

Deepal JayasingheSpring Web services and Axis2

As you know, Axis2 is a Web service framework which supports many things: it has support for scripting languages, for data services, and for EJB, CORBA and more. In addition, it has had support for Spring for a long time, with which you can deploy Spring beans as Web services in Axis2. Yes, I agree it is yet another way of getting the thing done. I also realized that this is not enough for Spring developers: they need everything to work within Spring.

To solve that, at WSO2 we came up with a solution where we integrated Axis2 into Spring. In doing this we converted all the Axis2 configuration files into bean descriptors; for example, we came up with a set of beans for axis2.xml. With this we have integrated Axis2 smoothly into Spring. After that, anyone can easily expose a bean as a Web service and get the power of all the other WS-* support, such as security, reliability, etc. Above all, you get the power of Axis2 while you are in the Spring container.


With this approach you can turn a bean into a Web service using just the following lines of configuration:



<bean id="services" class="org.wso2.spring.ws.WebServices">
    <property name="services">
        <list>
            <bean id="helloService" class="org.wso2.spring.ws.SpringWebService">
                <property name="serviceBean" ref="helloworld"/>
                <property name="serviceName" value="helloWorldService"/>
            </bean>
        </list>
    </property>
</bean>

You can read more about the Spring support at the following links:

WSO2 Web Services Framework for Spring

Hello World with WSO2 WSF/Spring

Deepal JayasingheMultiple source directories with maven2

Without any doubt I can say that Maven and Maven2 are very powerful project management tools, especially useful for building projects.

If I remember correctly, Maven 1 had a way to add multiple source directories; however, when I switched to Maven 2, I found that it does not support multiple source directories by default. Recently I needed to add multiple source directories to the WSO2 Registry sample module. That module has a few sub-directories, and I did not want to treat them as modules; what I wanted was to add them as source directories of the sample module. When I did some googling I found a very cool Maven plugin called "build-helper-maven-plugin", which helps to add multiple source directories to a single module.

The structure of the sample module is as follows:


samples
  -- handler-sample
       -- src
  -- filebased-sample
       -- src
  -- wsdl-sample
       -- src
  -- collection-handler-sample
       -- src
  -- custom-ui-sample1
       -- src

The corresponding plugin configuration is as follows:



<plugin>
    <groupId>org.codehaus.mojo</groupId>
    <artifactId>build-helper-maven-plugin</artifactId>
    <version>1.1</version>
    <executions>
        <execution>
            <id>add-source</id>
            <phase>generate-sources</phase>
            <goals>
                <goal>add-source</goal>
            </goals>
            <configuration>
                <sources>
                    <source>handler-sample/src</source>
                    <source>filebased-sample/src</source>
                    <source>wsdl-sample/src</source>
                    <source>collection-handler-sample/src</source>
                    <source>custom-ui-sample1/src</source>
                </sources>
            </configuration>
        </execution>
    </executions>
</plugin>

Eran ChinthakaGoogle thinks I am a "virus"

I was searching on Google for a grocery store in my area, and this is the result I got:

"We're sorry .. but your query looks similar to automated requests from a computer virus or spyware application. .... "

It seems someone had messed up the automatic spyware detection. Could this be due to an error in IE?

Update: I just checked a couple more queries, now using Firefox, and I got the same error. So it is something happening beyond my machine; it could be the internal network, or Google messing up.


Charitha KankanamgeHow to access HTTP headers from an Axis2 service implementation class

I have seen some users on the axis-user mailing list asking how to access the HTTP headers of the request SOAP message from the service implementation class.
It's easy and straightforward with the MessageContext class.
Let's see an example.

1. Create a service implementation class as follows:

import javax.servlet.http.HttpServletRequest;

import org.apache.axis2.context.MessageContext;

public class TestService {

    public String MyOperation(String s) {
        MessageContext msgCtx = MessageContext.getCurrentMessageContext();
        HttpServletRequest obj =
            (HttpServletRequest) msgCtx.getProperty("transport.http.servletRequest");
        System.out.println("Acceptable Encoding type: " + obj.getHeader("Accept-Encoding"));
        System.out.println("Acceptable character set: " + obj.getHeader("Accept-Charset"));
        System.out.println("Acceptable Media Type: " + obj.getHeader("Accept"));
        return s;
    }
}

As you can see in the code above, first we need to get the current MessageContext. Then from the MessageContext we can get the HttpServletRequest object, from which we can read whatever HTTP headers we want.

2. Write a service descriptor (services.xml) for the above service class and deploy the service in Axis2 (if you are not familiar with Axis2 deployment, please read the Axis2 user's guide).

3. Invoke the service in a RESTful manner:
http://<host>:<port>/services/TestService/MyOperation?s=hi

You will see the following on the Axis2 runtime console:

Acceptable Encoding type: gzip,deflate
Acceptable character set: ISO-8859-1,utf-8;q=0.7,*;q=0.7
Acceptable Media Type: text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8

Paul FremantleMashup IM

Yumani Ranaweera has written an excellent article on mashing up Instant Messaging (IM) with the WSO2 Mashup Server.

July 13, 2008

Deepal JayasingheAnnouncing: New Google C++ Testing Framework

The folks at Google have recently open-sourced their xUnit-based testing framework for C++ development. The framework is said by project developer Zhanyong Wan to have been in use internally at Google for years by thousands of their C++ developers.

Read the full story

Steve LoughranFloatplane

OK, you leave flickr alone for 10 minutes and enough comes up for you to log in; then you can open photos in separate windows and get the HTML fragments. At least it does work... Firefox seems to give up on not a few sites. Here then, after taxi, bus, train, and widebody jet, comes the last little transport of our outward journey.

Maldavian Air Taxi