Apache2 Deployment Guide

 

This deployment guide targets Ubuntu 12.04 with Ruby 1.9.3. Follow these steps:

  • Install the Apache2 server:

              $ sudo apt-get install apache2

  • Install cURL development headers with SSL support:

              $ sudo apt-get install libcurl4-openssl-dev

  • Install Apache 2 development headers:

              $ sudo apt-get install apache2-threaded-dev

  • And finally, the Apache Portable Runtime (APR) development headers:

              $ sudo apt-get install libapr1-dev

  • Install the Passenger gem:

              $ gem install passenger

  • Connect Passenger with Apache using the following command:

             $ passenger-install-apache2-module

  • After this step, add the following lines to apache2.conf:

              LoadModule passenger_module /home/escenic/.rvm/gems/ruby-1.9.3-p484/gems/passenger-4.0.41/buildout/apache2/mod_passenger.so

          <IfModule mod_passenger.c>

               PassengerRoot /home/escenic/.rvm/gems/ruby-1.9.3-p484/gems/passenger-4.0.41

               PassengerDefaultRuby /home/escenic/.rvm/gems/ruby-1.9.3-p484/wrappers/ruby

          </IfModule>

  • Configure a virtual host for the job-collector project under the Apache2 /var/www folder:

           <VirtualHost *:80>

               # ServerName [!! Your domain OR droplet’s IP]

               ServerName 192.168.33.10

              RailsEnv development

              RackEnv development

              DocumentRoot /var/www/job-collector/public

             <Directory /var/www/job-collector/public>

                    # This relaxes Apache security settings.

                    AllowOverride all

                   # MultiViews must be turned off.

                   Options -MultiViews

             </Directory>

         </VirtualHost>

 

  • Start the apache2 server.

    $ sudo service apache2 start

Invalid byte sequence in US-ASCII issue

Open the file /etc/apache2/envvars, where the following lines are written:

## The locale used by some modules like mod_dav
export LANG=C
## Uncomment the following line to use the system default locale instead:
#. /etc/default/locale

Swapping the comments, so that /etc/default/locale (which contains LANG="en_US.UTF-8") is used instead of LANG=C, resolved the issue for me without having to make a wrapper for my Ruby.
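A sketch of the edited /etc/apache2/envvars after swapping the comments (this assumes /etc/default/locale contains LANG="en_US.UTF-8"); restart Apache afterwards so the new locale takes effect:

## The locale used by some modules like mod_dav
#export LANG=C
## Uncomment the following line to use the system default locale instead:
. /etc/default/locale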

4 Ways to Activate Code in an OSGi Bundle

I would like to present a tutorial in which I show 4 ways to activate/start code inside your OSGi bundle. All of these ways are part of the OSGi 4.2 specifications. The goal of the tutorial is to briefly explain some OSGi specification chapters with samples. I do not intend to make a deep comparison of the activation approaches, just an overview with working examples. All sources can be found here.

 
 

Contents: 

  1. Requirements
  2. Use Case details
  3. Bundle Activator
  4. Declarative Services
  5. Blueprint Services
  6. Web Application Bundle
  7. How to run examples
  8. References
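All four examples register an implementation of the Echo service interface from the shared core bundle (org.knowhowlab.osgi.tips.activation.core). The interface itself is not included in this excerpt; the following is a minimal sketch inferred from how it is used in the listings below, so the original source may differ slightly:

package org.knowhowlab.osgi.tips.activation.core;

// Minimal sketch of the Echo service interface used by all four examples.
// The property key matches the "echo_type" key visible in the console output later on.
public interface Echo {
    String ECHO_TYPE_PROP = "echo_type";

    String echo(String str);
}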

Bundle Activator

Here is the Activator class (the package and imports follow the bundle manifest below):

package org.knowhowlab.osgi.tips.activation.activator;

import java.util.Dictionary;
import java.util.Hashtable;

import org.knowhowlab.osgi.tips.activation.core.Echo;
import org.osgi.framework.BundleActivator;
import org.osgi.framework.BundleContext;
import org.osgi.framework.ServiceReference;
import org.osgi.framework.ServiceRegistration;
import org.osgi.service.prefs.PreferencesService;
import org.osgi.util.tracker.ServiceTracker;
import org.osgi.util.tracker.ServiceTrackerCustomizer;

public class Activator implements BundleActivator, Echo {
    // ServiceTracker that tracks PreferencesService
    private ServiceTracker serviceTracker;
    // BundleContext
    private BundleContext bc;
    // Registration of Echo service
    private ServiceRegistration registration;

    // activation
    public void start(BundleContext context) throws Exception {
        bc = context;
        // init and start ServiceTracker to track PreferencesService
        serviceTracker = new ServiceTracker(context, PreferencesService.class.getName(), new Customizer());
        serviceTracker.open();
    }

    // deactivation
    public void stop(BundleContext context) throws Exception {
        // stop ServiceTracker to track PreferencesService
        serviceTracker.close();
        serviceTracker = null;
    }

    public String echo(String str) {
        return str;
    }

    // customizer that handles tracked service registration/modification/unregistration events
    private class Customizer implements ServiceTrackerCustomizer {
        public Object addingService(ServiceReference reference) {
            System.out.println("PreferencesService is linked");
            // register Echo service
            Dictionary<String, String> props = new Hashtable<String, String>();
            props.put(ECHO_TYPE_PROP, "BundleActivator");
            registration = bc.registerService(Echo.class.getName(), Activator.this, props);

            return bc.getService(reference);
        }

        public void modifiedService(ServiceReference reference, Object service) {
        }

        public void removedService(ServiceReference reference, Object service) {
            // unregister Echo service
            registration.unregister();
            System.out.println("PreferencesService is unlinked");
        }
    }
}
    Bundle MANIFEST.MF:
Manifest-Version: 1.0
Bundle-Name: KnowHowLab Tips&Tricks: Bundle Activation - Activator
Bundle-Version: 1.0.0.SNAPSHOT
Bundle-ManifestVersion: 2
Bundle-Activator: org.knowhowlab.osgi.tips.activation.activator.Activa
 tor
Bundle-Description: KnowHowLab Tips and Tricks: Bundle Activation - Ac
 tivator
Import-Package: org.knowhowlab.osgi.tips.activation.core,org.osgi.fram
 ework;version="1.5",org.osgi.service.prefs;version="1.1",org.osgi.uti
 l.tracker;version="1.4"
Bundle-SymbolicName: org.knowhowlab.osgi.tips.activation.activator

Declarative Services

Here is the EchoComponent class:

package org.knowhowlab.osgi.tips.activation.ds;

import org.knowhowlab.osgi.tips.activation.core.Echo;
import org.osgi.service.prefs.PreferencesService;

public class EchoComponent implements Echo {
    // Reference to PreferencesService
    private PreferencesService preferencesService;

    public String echo(String str) {
        return str;
    }

    // Called to bind PreferencesService
    public void bindPreferencesService(PreferencesService preferencesService) {
        System.out.println("PreferencesService is linked");
        this.preferencesService = preferencesService;
    }

    // Called to unbind PreferencesService
    public void unbindPreferencesService(PreferencesService preferencesService) {
        this.preferencesService = null;
        System.out.println("PreferencesService is unlinked");
    }
}
    Here is the Component Description (OSGI-INF/serviceComponents.xml):
<?xml version="1.0" encoding="UTF-8"?>
<components xmlns:scr="http://www.osgi.org/xmlns/scr/v1.0.0">
    <!-- Echo Component -->
    <scr:component enabled="true" immediate="true" name="Echo">
        <!-- Component Class name -->
        <implementation class="org.knowhowlab.osgi.tips.activation.ds.EchoComponent"/>
        <!-- Echo Service description -->
        <service servicefactory="false">
            <provide interface="org.knowhowlab.osgi.tips.activation.core.Echo"/>
        </service>
        <!-- Service registration properties -->
        <property name="echo_type" type="String" value="Declarative Services"/>
        <property name="service.pid" value="Echo"/>
        <!-- PreferencesService dependency description -->
        <reference name="preferencesService"
                   interface="org.osgi.service.prefs.PreferencesService"
                   cardinality="1..1"
                   policy="static"
                   bind="bindPreferencesService"
                   unbind="unbindPreferencesService"/>
    </scr:component>
</components>
    Bundle MANIFEST.MF:
Manifest-Version: 1.0
Service-Component: OSGI-INF/serviceComponents.xml
Export-Package: org.knowhowlab.osgi.tips.activation.ds;uses:="org.know
 howlab.osgi.tips.activation.core,org.osgi.service.prefs"
Bundle-Name: KnowHowLab Tips&Tricks: Bundle Activation - DS
Bundle-Version: 1.0.0.SNAPSHOT
Bundle-ManifestVersion: 2
Import-Package: org.knowhowlab.osgi.tips.activation.core,org.osgi.serv
 ice.prefs;version="1.1"
Bundle-SymbolicName: org.knowhowlab.osgi.tips.activation.ds
    There is a Component Description annotation library (see the SCR annotations reference below) that generates the Component Description during the OSGi bundle build and makes development of Service Components easier. Here is the code with annotations:
// Component description
@Component(name = "Echo", immediate = true)
// Service description
@Service(value = Echo.class)
// Service properties
@Property(name = Echo.ECHO_TYPE_PROP, value = "Declarative Services")
public class EchoComponent implements Echo {
    // Reference to PreferencesService
    @Reference(name = "preferencesService", referenceInterface = PreferencesService.class,
            cardinality = ReferenceCardinality.MANDATORY_UNARY, policy = ReferencePolicy.STATIC)
    private PreferencesService preferencesService;

    public String echo(String str) {
        return str;
    }

    // Called to bind PreferencesService
    public void bindPreferencesService(PreferencesService preferencesService) {
        System.out.println("PreferencesService is linked");
        this.preferencesService = preferencesService;
    }

    // Called to unbind PreferencesService
    public void unbindPreferencesService(PreferencesService preferencesService) {
        this.preferencesService = null;
        System.out.println("PreferencesService is unlinked");
    }
}

Blueprint Services

Here is the EchoBean class:

package org.knowhowlab.osgi.tips.activation.blueprint;

import java.util.Map;

import org.knowhowlab.osgi.tips.activation.core.Echo;
import org.osgi.service.prefs.PreferencesService;

public class EchoBean implements Echo {
    private PreferencesService preferencesService;

    public String echo(String str) {
        return str;
    }

    // Called to bind PreferencesService
    public void bindPreferencesService(PreferencesService preferencesService, Map props) {
        this.preferencesService = preferencesService;
        System.out.println("PreferencesService is linked");
    }

    // Called to unbind PreferencesService
    public void unbindPreferencesService(PreferencesService preferencesService, Map props) {
        this.preferencesService = null;
        System.out.println("PreferencesService is unlinked");
    }
}
    Here is the Bean Definition (OSGI-INF/blueprint/echo.xml):
<?xml version="1.0" encoding="UTF-8" standalone="no"?>
<blueprint xmlns="http://www.osgi.org/xmlns/blueprint/v1.0.0">
    <!-- Bean definition -->
    <bean id="echoservice" class="org.knowhowlab.osgi.tips.activation.blueprint.EchoBean"/>
    <!-- Service definition -->
    <service ref="echoservice" interface="org.knowhowlab.osgi.tips.activation.core.Echo"
             depends-on="preferencesService">
        <service-properties>
            <entry key="ECHO_TYPE_PROP" value="Blueprint"/>
        </service-properties>
    </service>
    <!-- PreferencesService reference definition -->
    <reference id="preferencesService" interface="org.osgi.service.prefs.PreferencesService"
               availability="mandatory">
        <reference-listener bind-method="bindPreferencesService" unbind-method="unbindPreferencesService">
            <ref component-id="echoservice"/>
        </reference-listener>
    </reference>
</blueprint>
    I found a very nice project, BlueprintAnnotation, under the Apache Aries project. It provides runtime annotations for Blueprint beans that replace the Component/Bean Definition files. Here is EchoBean with Blueprint annotations:
// Bean definition
@Bean(id = "echoservice")
// Service definition
@Service(interfaces = Echo.class,
        serviceProperties = @ServiceProperty(key = Echo.ECHO_TYPE_PROP, value = "Blueprint-Annotations"))
@ReferenceListener
public class EchoBean implements Echo {
    // Reference definition
    @Reference(availability = "mandatory", referenceListeners = @ReferenceListener(ref = "echoservice"))
    private PreferencesService preferencesService;

    public String echo(String str) {
        return str;
    }

    @Bind
    public void bindPreferencesService(PreferencesService preferencesService, Map props) {
        this.preferencesService = preferencesService;
        System.out.println("PreferencesService is linked");
    }

    @Unbind
    public void unbindPreferencesService(PreferencesService preferencesService, Map props) {
        this.preferencesService = null;
        System.out.println("PreferencesService is unlinked");
    }
}

Web Application Bundle

Here is the ContextListener class (registered in web.xml below):

package org.knowhowlab.osgi.tips.activation.wab;

import java.util.Dictionary;
import java.util.Hashtable;

import javax.servlet.ServletContextEvent;
import javax.servlet.ServletContextListener;

import org.knowhowlab.osgi.tips.activation.core.Echo;
import org.osgi.framework.BundleContext;
import org.osgi.framework.ServiceReference;
import org.osgi.framework.ServiceRegistration;
import org.osgi.service.prefs.PreferencesService;
import org.osgi.util.tracker.ServiceTracker;
import org.osgi.util.tracker.ServiceTrackerCustomizer;

public class ContextListener implements ServletContextListener, Echo {
    private ServiceTracker serviceTracker;
    private BundleContext bc;
    private ServiceRegistration registration;

    public void contextInitialized(ServletContextEvent sce) {
        // the web extender publishes the BundleContext as a ServletContext attribute
        bc = (BundleContext) sce.getServletContext().getAttribute("osgi-bundlecontext");
        serviceTracker = new ServiceTracker(bc, PreferencesService.class.getName(), new Customizer());
        serviceTracker.open();
    }

    public void contextDestroyed(ServletContextEvent sce) {
        serviceTracker.close();
        serviceTracker = null;
    }

    public String echo(String str) {
        return str;
    }

    private class Customizer implements ServiceTrackerCustomizer {

        public Object addingService(ServiceReference reference) {
            System.out.println("PreferencesService is linked");

            Dictionary<String, String> props = new Hashtable<String, String>();
            props.put(ECHO_TYPE_PROP, "WAB");
            registration = bc.registerService(Echo.class.getName(), ContextListener.this, props);

            return bc.getService(reference);
        }

        public void modifiedService(ServiceReference reference, Object service) {
        }

        public void removedService(ServiceReference reference, Object service) {
            registration.unregister();
            System.out.println("PreferencesService is unlinked");
        }
    }
}
    Here is web.xml:
<web-app xmlns="http://java.sun.com/xml/ns/javaee"
         xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
         metadata-complete="true"
         version="2.5">
    <listener>
        <listener-class>org.knowhowlab.osgi.tips.activation.wab.ContextListener</listener-class>
    </listener>

    <!-- welcome file mapping -->
    <welcome-file-list>
        <welcome-file>index.htm</welcome-file>
    </welcome-file-list>
</web-app>
    Here is MANIFEST.MF:
Manifest-Version: 1.0
Export-Package: org.knowhowlab.osgi.tips.activation.wab;uses:="org.kno
 whowlab.osgi.tips.activation.core,org.osgi.util.tracker,org.osgi.fram
 ework,javax.servlet,org.osgi.service.prefs";version="1.0.0.SNAPSHOT"
Bundle-Classpath: WEB-INF/classes
Bundle-Name: KnowHowLab Tips&Tricks: Bundle Activation - WAB
Web-ContextPath: /
Bundle-Version: 1.0.0.SNAPSHOT
Bundle-ManifestVersion: 2
Import-Package: javax.servlet,org.knowhowlab.osgi.tips.activation.core
 ,org.osgi.framework;version="1.5",org.osgi.service.prefs;version="1.1
 ",org.osgi.util.tracker;version="1.4"
Bundle-SymbolicName: org.knowhowlab.osgi.tips.activation.wab

How to run examples

  1. Download samples from here
  2. Compile with Maven:

       mvn clean install

  3. Run the examples with the Maven profiles:

       mvn -f run.xml -P activator
       mvn -f run.xml -P ds
       mvn -f run.xml -P blueprint
       mvn -f run.xml -P blueprint-annotations

     Note: the blueprint-annotations profile does not work for me (even with the Aries BlueprintAnnotation sample). I'll try again with the next version.

       mvn -f run.xml -P wab
       mvn -f run.xml -P all
        Framework started:
id      State       Bundle
0       ACTIVE      org.eclipse.osgi_3.6.0.v20100517
1       ACTIVE      org.eclipse.osgi.services_3.2.100.v20100503
2       ACTIVE      org.eclipse.equinox.common_3.6.0.v20100503
3       ACTIVE      org.eclipse.equinox.preferences_3.3.0.v20100503
4       ACTIVE      org.eclipse.equinox.cm_1.0.200.v20100520
5       ACTIVE      org.eclipse.equinox.util_1.0.200.v20100503
6       ACTIVE      org.eclipse.equinox.ds_1.2.0.v20100507
7       ACTIVE      org.apache.geronimo.specs.geronimo-servlet_2.5_spec_1.2.0
8       ACTIVE      org.apache.geronimo.specs.geronimo-jaspic_1.0_spec_1.1.0
9       ACTIVE      org.eclipse.jetty.aggregate.jetty-all-server_7.2.0.v20101020
10      ACTIVE      org.eclipse.jetty.osgi.boot_7.2.0.v20101020
11      ACTIVE      slf4j.api_1.6.1
                    Fragments=12
12      RESOLVED    slf4j.jdk14_1.6.1
                    Master=11
13      ACTIVE      org.apache.aries.blueprint_0.2.0.incubating
14      ACTIVE      org.knowhowlab.osgi.tips.activation.core_1.0.0.SNAPSHOT
15      ACTIVE      org.knowhowlab.osgi.tips.activation.activator_1.0.0.SNAPSHOT
16      ACTIVE      org.knowhowlab.osgi.tips.activation.ds_1.0.0.SNAPSHOT
17      ACTIVE      org.knowhowlab.osgi.tips.activation.blueprint_1.0.0.SNAPSHOT
18      ACTIVE      org.knowhowlab.osgi.tips.activation.wab_1.0.0.SNAPSHOT
        All Echo services registered:
osgi> services (objectClass=org.knowhowlab.osgi.tips.activation.core.Echo)
{org.knowhowlab.osgi.tips.activation.core.Echo}={echo_type=BundleActivator, service.id=41}
  Registered by bundle: org.knowhowlab.osgi.tips.activation.activator_1.0.0.SNAPSHOT [15]
  No bundles using service.
{org.knowhowlab.osgi.tips.activation.core.Echo}={service.pid=Echo, echo_type=Declarative Services, component.name=Echo, component.id=0, service.id=42}
  Registered by bundle: org.knowhowlab.osgi.tips.activation.ds_1.0.0.SNAPSHOT [16]
  No bundles using service.
{org.knowhowlab.osgi.tips.activation.core.Echo}={osgi.service.blueprint.compname=echoservice, ECHO_TYPE_PROP=Blueprint, service.id=46}
  Registered by bundle: org.knowhowlab.osgi.tips.activation.blueprint_1.0.0.SNAPSHOT [17]
  No bundles using service.
{org.knowhowlab.osgi.tips.activation.core.Echo}={echo_type=WAB, service.id=49}
  Registered by bundle: org.knowhowlab.osgi.tips.activation.wab_1.0.0.SNAPSHOT [18]
  No bundles using service.
        Stop the Equinox Preferences bundle (the provider of PreferencesService), and all Echo services are unregistered:
osgi> stop 3
PreferencesService is unlinked
PreferencesService is unlinked
PreferencesService is unlinked
PreferencesService is unlinked

osgi> services (objectClass=org.knowhowlab.osgi.tips.activation.core.Echo)
No registered services.
  References:
    1. OSGi specifications
    2. OSGi API
    3. Bnd tool – tool to help you diagnose and create OSGi bundles
    4. maven-bundle-plugin – Maven plugin for Bnd tool
    5. maven-scr-plugin – Maven plugin to ease the development of OSGi component and services
    6. SCR annotations – How to use annotations for OSGi components
    7. Aries Blueprint – Blueprint “Hello World” tutorial
    8. Aries BlueprintAnnotations – BlueprintAnnotation tutorial
    9. Pax Runner – tool to provision OSGi bundles in all major open source OSGi framework implementations (Felix, Equinox, Knopflerfish, Concierge)

Your comments are welcome. 

Thank you. 

Dmytro

 
 

Pasted from <http://blog.knowhowlab.org/2010/10/osgi-tutorial-4-ways-to-activate-code.html>

Top 8 Java People

Here are the top 8 Java people. They created frameworks, products, tools, or books that contributed to the Java community and changed the way Java is coded.

P.S. The order is based on my personal preference.

8. Tomcat & Ant Founder


James Duncan Davidson, while he was a software engineer at Sun Microsystems (1997–2001), created Tomcat, a Java-based web server and servlet container that is still widely used in Java web projects, as well as the Ant build tool, which uses XML to describe the build process and its dependencies and is still the de facto standard for building Java-based web applications.

Related Links

  1. James Duncan Davidson Twitter
  2. James Duncan Davidson Wiki
  3. James Duncan Davidson personal blog
  4. Apache Ant
  5. Apache Tomcat

7. Test Driven Development & JUnit Founder


Kent Beck is the creator of the Extreme Programming and Test Driven Development software development methodologies. Furthermore, he and Erich Gamma created JUnit, a simple testing framework that has turned into the de facto standard for testing Java-based applications. The combination of JUnit and Test Driven Development made a big change in the way Java is coded, although many Java developers are still not willing to follow it.

Related Links

  1. Kent Beck Twitter
  2. Kent Beck Wiki
  3. Kent Beck Blog
  4. JUnit Testing Framework
  5. Extreme Programming Wiki
  6. Test Driven Development Wiki

News & Interviews

  1. Kent Beck: “We thought we were just programming on an airplane”
  2. Interview with Kent Beck and Martin Fowler
  3. eXtreme Programming An interview with Kent Beck

Kent Beck Books

  1. Extreme Programming Explained: Embrace Change (2nd Edition)
  2. Refactoring: Improving the Design of Existing Code
  3. JUnit Pocket Guide

6. Java Collections Framework


Joshua Bloch led the design and implementation of numerous Java platform features, including the JDK 5.0 language enhancements and the award-winning Java Collections Framework. In June 2004 he left Sun and became Chief Java Architect at Google. Furthermore, he won the prestigious Jolt Award from Software Development Magazine for his book "Effective Java", which is arguably a must-read Java book.

Related Links

  1. Joshua Bloch Twitter
  2. Joshua Bloch Wiki

News & Interviews

  1. Effective Java: An Interview with Joshua Bloch
  2. Rock Star Josh Bloch

Joshua Bloch Books

  1. Effective Java (2nd Edition)
  2. Java Concurrency in Practice

5. JBoss Founder


Marc Fleury founded JBoss in 2001, an open-source Java application server and arguably the de facto standard for deploying Java-based web applications. He later sold JBoss to Red Hat and joined Red Hat to continue supporting JBoss development. On 9 February 2007, he decided to leave Red Hat to pursue other personal interests, such as teaching, research in biology, music, and his family.

Related Links

  1. Marc Fleury Wiki
  2. Marc Fleury Blog
  3. JBoss Application Server

News & Interviews

  1. Could Red Hat lose JBoss founder?
  2. JBoss founder Marc Fleury leaves Red Hat, now what?
  3. JBoss’s Marc Fleury on SOA, ESB and OSS
  4. Resurrecting Marc Fleury

4. Struts Founder


Craig McClanahan is the creator of Struts, a popular open source MVC framework for building Java-based web applications; arguably, every Java developer of that era knew how to code with Struts. With its huge early success, Struts was implemented in nearly every old Java web application project.

Related Links

  1. Craig McClanahan Wiki
  2. Craig McClanahan Blog
  3. Apache Struts

News & Interviews

  1. Interview with Craig McClanahan
  2. Struts Or JSF?

3. Spring Founder


Rod Johnson is the founder of the Spring Framework, an open source application framework for Java, and CEO at SpringSource. Furthermore, his best-selling Expert One-on-One J2EE Design and Development (2002) was one of the most influential books ever published on J2EE.

Related Links

  1. Rod Johnson Twitter
  2. Rod Johnson Blog
  3. SpringSource
  4. Spring Framework Wiki

News & Interviews

  1. VMware.com : VMware to acquire SpringSource
  2. Rod Johnson : VMware to acquire SpringSource
  3. Interview with Rod Johnson – CEO – Interface21
  4. Q&A with Rod Johnson over Spring’s maintenance policy changes
  5. Expert One-on-One J2EE Design and Development: Interview with Rod Johnson

Rod Johnson Books

  1. Expert One-on-One J2EE Design and Development (Programmer to Programmer)
  2. Expert One-on-One J2EE Development without EJB

2. Hibernate Founder


Gavin King is the founder of the Hibernate project, a popular object/relational persistence solution for Java, and the creator of Seam, an application framework for Java EE 5. Furthermore, he contributed heavily to the design of EJB 3.0 and JPA.

Related Links

  1. Gavin King Blog
  2. Hibernate Wiki
  3. Hibernate Framework
  4. JBoss seam

News & Interviews

  1. Tech Chat: Gavin King on Contexts and Dependency Injection, Weld, Java EE 6
  2. JPT : The Interview: Gavin King, Hibernate
  3. JavaFree : Interview with Gavin King, founder of Hibernate
  4. Seam in Depth with Gavin King

Gavin King Books

  1. Java Persistence with Hibernate
  2. Hibernate in Action (In Action series)

1. Father Of The Java Programming Language


James Gosling is generally credited as the inventor of the Java programming language in 1994. He created the original design of Java and implemented its original compiler and virtual machine. For this achievement he was elected to the United States National Academy of Engineering. On April 2, 2010, he left Sun Microsystems, which had recently been acquired by the Oracle Corporation. Regarding why he left, Gosling wrote on his blog that “Just about anything I could say that would be accurate and honest would do more harm than good.”

Related Links

  1. James Gosling Blog
  2. James Gosling Wiki

News & Interviews

  1. Interview with Dennis Ritchie, Bjarne Stroustrup, and James Gosling
  2. Interview: James Gosling, ‘the Father of Java’
  3. Developer Interview: James Gosling

Note

Please feel free to share your personal opinion.

Windows Version History

 
 

Version | Marketing name | Editions | Release date | RTM build
NT 3.1 | Windows NT 3.1 | Workstation (named just Windows NT), Advanced Server | 27 July 1993 | 528
NT 3.5 | Windows NT 3.5 | Workstation, Server | 21 September 1994 | 807
NT 3.51 | Windows NT 3.51 | Workstation, Server | 30 May 1995 | 1057
NT 4.0 | Windows NT 4.0 | Workstation, Server, Server Enterprise Edition, Terminal Server, Embedded | 29 July 1996 | 1381
NT 5.0 | Windows 2000 | Professional, Server, Advanced Server, Datacenter Server, Advanced/Datacenter Server Limited Edition | 17 February 2000 | 2195
NT 5.1 | Windows XP | Home, Professional, 64-bit Edition (Itanium), Media Center (original, 2003, 2004 & 2005), Tablet PC (original and 2005), Starter, Embedded, Home N, Professional N | 25 October 2001 | 2600
NT 5.1 | Windows Fundamentals for Legacy PCs | N/A | 8 July 2006 | 2600
NT 5.2 | Windows XP | 64-bit Edition Version 2003 (Itanium)[18] | 28 March 2003 | 3790
NT 5.2 | Windows Server 2003 | Standard, Enterprise, Datacenter, Web, Storage, Small Business Server, Compute Cluster | 24 April 2003 | 3790
NT 5.2 | Windows XP | Professional x64 Edition | 25 April 2005 | 3790
NT 5.2 | Windows Server 2003 R2 | Standard, Enterprise, Datacenter, Web, Storage, Small Business Server, Compute Cluster | 6 December 2005 | 3790
NT 5.2 | Windows Home Server | N/A | 16 July 2007 | 3790
NT 6.0 | Windows Vista | Starter, Home Basic, Home Premium, Business, Enterprise, Ultimate, Home Basic N, Business N | Business: 30 November 2006; Consumer: 30 January 2007 | 6000, 6001 (SP1), 6002 (SP2)
NT 6.0 | Windows Server 2008 | Foundation, Standard, Enterprise, Datacenter, Web Server, HPC Server, Itanium-Based Systems[19] | 27 February 2008 | 6001, 6002 (SP2)
NT 6.1[20] | Windows 7 | Starter, Home Basic, Home Premium, Professional, Enterprise, Ultimate[21] | 22 October 2009[22] | 7600, 7601 (SP1)
NT 6.1[20] | Windows Server 2008 R2 | Foundation, Standard, Enterprise, Datacenter, Web Server, HPC Server, Itanium-Based Systems | 22 October 2009[23] | 7600, 7601 (SP1)
TBA | Windows 8 | TBA | TBA | TBA
TBA | Windows Server 8 | TBA | TBA | TBA

Windows NT releases

 
 

Windows NT 3.1 to 3.51 incorporated the Program Manager…

Designing

At the core of the compute engine is a protocol that enables tasks to be submitted to the compute engine, the compute engine to run those tasks, and the results of those tasks to be returned to the client. This protocol is expressed in the interfaces that are supported by the compute engine. The remote communication for this protocol is illustrated in the following figure.

Each interface contains a single method. The compute engine’s remote interface, Compute, enables tasks to be submitted to the engine. The client interface, Task, defines how the compute engine executes a submitted task.

The compute.Compute interface defines the remotely accessible part, the compute engine itself. Here is the source code for the Compute interface:

package compute;

import java.rmi.Remote;
import java.rmi.RemoteException;

public interface Compute extends Remote {
    <T> T executeTask(Task<T> t) throws RemoteException;
}

By extending the interface java.rmi.Remote, the Compute interface identifies itself as an interface whose methods can be invoked from another Java virtual machine. Any object that implements this interface can be a remote object.

As a member of a remote interface, the executeTask method is a remote method. Therefore, this method must be defined as being capable of throwing a java.rmi.RemoteException. This exception is thrown by the RMI system from a remote method invocation to indicate that either a communication failure or a protocol error has occurred. A RemoteException is a checked exception, so any code invoking a remote method needs to handle this exception by either catching it or declaring it in its throws clause.

The second interface needed for the compute engine is the Task interface, which is the type of the parameter to the executeTask method in the Compute interface. The compute.Task interface defines the interface between the compute engine and the work that it needs to do, providing the way to start the work. Here is the source code for the Task interface:

package compute;

public interface Task<T> {
    T execute();
}

The Task interface defines a single method, execute, which has no parameters and throws no exceptions. Because the interface does not extend Remote, the method in this interface doesn’t need to list java.rmi.RemoteException in its throws clause.

The Task interface has a type parameter, T, which represents the result type of the task’s computation. This interface’s execute method returns the result of the computation and thus its return type is T.

The Compute interface’s executeTask method, in turn, returns the result of the execution of the Task instance passed to it. Thus, the executeTask method has its own type parameter, T, that associates its own return type with the result type of the passed Task instance.

RMI uses the Java object serialization mechanism to transport objects by value between Java virtual machines. For an object to be considered serializable, its class must implement the java.io.Serializable marker interface. Therefore, classes that implement the Task interface must also implement Serializable, as must the classes of objects used for task results.

Different kinds of tasks can be run by a Compute object as long as they are implementations of the Task type. The classes that implement this interface can contain any data needed for the computation of the task and any other methods needed for the computation.
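For illustration, here is a minimal sketch of a task class a client might define; the class name and the computation are assumptions, not part of the original text:

import java.io.Serializable;

import compute.Task;

// A client-defined task: serializable so RMI can ship it to the engine by value.
public class SquareTask implements Task<Integer>, Serializable {

    private static final long serialVersionUID = 1L;

    private final int value;

    public SquareTask(int value) {
        this.value = value;
    }

    // Runs inside the compute engine's JVM; the Integer result is returned to the client.
    public Integer execute() {
        return value * value;
    }
}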

Here is how RMI makes this simple compute engine possible. Because RMI can assume that the Task objects are written in the Java programming language, implementations of the Task object that were previously unknown to the compute engine are downloaded by RMI into the compute engine’s Java virtual machine as needed. This capability enables clients of the compute engine to define new kinds of tasks to be run on the server machine without needing the code to be explicitly installed on that machine.

The compute engine, implemented by the ComputeEngine class, implements the Compute interface, enabling different tasks to be submitted to it by calls to its executeTask method. These tasks are run using the task's implementation of the execute method, and the results are returned to the remote client.

Writing an RMI Server

The compute engine server accepts tasks from clients, runs the tasks, and returns any results. The server code consists of an interface and a class. The interface defines the methods that can be invoked from the client. Essentially, the interface defines the client’s view of the remote object. The class provides the implementation.

 
 

Designing a Remote Interface

This section explains the Compute interface, which provides the connection between the client and the server. You will also learn about the RMI API, which supports this communication.

Implementing a Remote Interface

This section explores the class that implements the Compute interface, thereby implementing a remote object. This class also provides the rest of the code that makes up the server program, including a main method that creates an instance of the remote object, registers it with the RMI registry, and sets up a security manager.
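A condensed sketch of such an implementation class, assuming the remote object is exported with UnicastRemoteObject and bound in the registry under the name "Compute" (the binding name is an assumption for illustration):

package engine;

import java.rmi.registry.LocateRegistry;
import java.rmi.registry.Registry;
import java.rmi.server.UnicastRemoteObject;

import compute.Compute;
import compute.Task;

public class ComputeEngine implements Compute {

    // Simply run the submitted task and hand its result back to RMI.
    public <T> T executeTask(Task<T> t) {
        return t.execute();
    }

    public static void main(String[] args) {
        // A security manager lets downloaded task code run under a security policy.
        if (System.getSecurityManager() == null) {
            System.setSecurityManager(new SecurityManager());
        }
        try {
            Compute engine = new ComputeEngine();
            // Export the object so it can receive remote calls (port 0 = anonymous port).
            Compute stub = (Compute) UnicastRemoteObject.exportObject(engine, 0);
            // Register the stub under a well-known name in the local RMI registry.
            Registry registry = LocateRegistry.getRegistry();
            registry.rebind("Compute", stub);
            System.out.println("ComputeEngine bound");
        } catch (Exception e) {
            System.err.println("ComputeEngine exception:");
            e.printStackTrace();
        }
    }
}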

Overview

RMI applications often comprise two separate programs, a server and a client. A typical server program creates some remote objects, makes references to these objects accessible, and waits for clients to invoke methods on these objects. A typical client program obtains a remote reference to one or more remote objects on a server and then invokes methods on them. RMI provides the mechanism by which the server and the client communicate and pass information back and forth. Such an application is sometimes referred to as a distributed object application.

 
 

Distributed object applications need to do the following:

  • Locate remote objects. Applications can use various mechanisms to obtain references to remote objects. For example, an application can register its remote objects with RMI’s simple naming facility, the RMI registry. Alternatively, an application can pass and return remote object references as part of other remote invocations.
  • Communicate with remote objects. Details of communication between remote objects are handled by RMI. To the programmer, remote communication looks similar to regular Java method invocations.
  • Load class definitions for objects that are passed around. Because RMI enables objects to be passed back and forth, it provides mechanisms for loading an object’s class definitions as well as for transmitting an object’s data.

 
 

The following illustration depicts an RMI distributed application that uses the RMI registry to obtain a reference to a remote object. The server calls the registry to associate (or bind) a name with a remote object. The client looks up the remote object by its name in the server’s registry and then invokes a method on it. The illustration also shows that the RMI system uses an existing web server to load class definitions, from server to client and from client to server, for objects when needed.
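A minimal sketch of a client that follows this pattern, assuming the server host name, the binding name "Compute", and the SquareTask class from the earlier sketch:

import java.rmi.registry.LocateRegistry;
import java.rmi.registry.Registry;

import compute.Compute;

public class ComputeClient {
    public static void main(String[] args) throws Exception {
        // Look up the remote object's stub by name in the server's registry.
        Registry registry = LocateRegistry.getRegistry("server.example.com"); // assumed host
        Compute comp = (Compute) registry.lookup("Compute");                  // assumed binding name
        // Submitting the task lets RMI download SquareTask's code to the engine if needed.
        Integer result = comp.executeTask(new SquareTask(7));
        System.out.println("7 squared on the server: " + result);
    }
}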

 
 

Advantages of Dynamic Code Loading

One of the central and unique features of RMI is its ability to download the definition of an object’s class if the class is not defined in the receiver’s Java virtual machine. All of the types and behavior of an object, previously available only in a single Java virtual machine, can be transmitted to another, possibly remote, Java virtual machine. RMI passes objects by their actual classes, so the behavior of the objects is not changed when they are sent to another Java virtual machine. This capability enables new types and behaviors to be introduced into a remote Java virtual machine, thus dynamically extending the behavior of an application. The compute engine example in this trail uses this capability to introduce new behavior to a distributed program.

Remote Interfaces, Objects, and Methods

Like any other Java application, a distributed application built by using Java RMI is made up of interfaces and classes. The interfaces declare methods. The classes implement the methods declared in the interfaces and, perhaps, declare additional methods as well. In a distributed application, some implementations might reside in some Java virtual machines but not others. Objects with methods that can be invoked across Java virtual machines are called remote objects.

An object becomes remote by implementing a remote interface, which has the following characteristics:

  • A remote interface extends the interface java.rmi.Remote.
  • Each method of the interface declares java.rmi.RemoteException in its throws clause, in addition to any application-specific exceptions.

RMI treats a remote object differently from a non-remote object when the object is passed from one Java virtual machine to another Java virtual machine. Rather than making a copy of the implementation object in the receiving Java virtual machine, RMI passes a remote stub for a remote object. The stub acts as the local representative, or proxy, for the remote object and basically is, to the client, the remote reference. The client invokes a method on the local stub, which is responsible for carrying out the method invocation on the remote object.

A stub for a remote object implements the same set of remote interfaces that the remote object implements. This property enables a stub to be cast to any of the interfaces that the remote object implements. However, only those methods defined in a remote interface are available to be called from the receiving Java virtual machine.

Creating Distributed Applications by Using RMI

Using RMI to develop a distributed application involves these general steps:

  1. Designing and implementing the components of your distributed application.
  2. Compiling sources.
  3. Making classes network accessible.
  4. Starting the application.

Designing and Implementing the Application Components

First, determine your application architecture, including which components are local objects and which components are remotely accessible. This step includes:

  • Defining the remote interfaces. A remote interface specifies the methods that can be invoked remotely by a client. Clients program to remote interfaces, not to the implementation classes of those interfaces. The design of such interfaces includes the determination of the types of objects that will be used as the parameters and return values for these methods. If any of these interfaces or classes do not yet exist, you need to define them as well.
  • Implementing the remote objects. Remote objects must implement one or more remote interfaces. The remote object class may include implementations of other interfaces and methods that are available only locally. If any local classes are to be used for parameters or return values of any of these methods, they must be implemented as well.
  • Implementing the clients. Clients that use remote objects can be implemented at any time after the remote interfaces are defined, including after the remote objects have been deployed.

Compiling Sources

As with any Java program, you use the javac compiler to compile the source files. The source files contain the declarations of the remote interfaces, their implementations, any other server classes, and the client classes.

Note: With versions prior to Java Platform, Standard Edition 5.0, an additional step was required to build stub classes, by using the rmic compiler. However, this step is no longer necessary.

 
 

Making Classes Network Accessible

In this step, you make certain class definitions network accessible, such as the definitions for the remote interfaces and their associated types, and the definitions for classes that need to be downloaded to the clients or servers. Class definitions are typically made network accessible through a web server.

Starting the Application

Starting the application includes running the RMI remote object registry, the server, and the client.

The rest of this section walks through the steps used to create a compute engine.

Building a Generic Compute Engine

This trail focuses on a simple, yet powerful, distributed application called a compute engine. The compute engine is a remote object on the server that takes tasks from clients, runs the tasks, and returns any results. The tasks are run on the machine where the server is running. This type of distributed application can enable a number of client machines to make use of a particularly powerful machine or a machine that has specialized hardware.

The novel aspect of the compute engine is that the tasks it runs do not need to be defined when the compute engine is written or started. New kinds of tasks can be created at any time and then given to the compute engine to be run. The only requirement of a task is that its class implement a particular interface. The code needed to accomplish the task can be downloaded by the RMI system to the compute engine. Then, the compute engine runs the task, using the resources on the machine on which the compute engine is running.

The ability to perform arbitrary tasks is enabled by the dynamic nature of the Java platform, which is extended to the network by RMI. RMI dynamically loads the task code into the compute engine’s Java virtual machine and runs the task without prior knowledge of the class that implements the task. Such an application, which has the ability to download code dynamically, is often called a behavior-based application. Such applications usually require full agent-enabled infrastructures. With RMI, such applications are part of the basic mechanisms for distributed computing on the Java platform.

Connecting with DataSource

This section covers DataSource objects, which are the preferred means of getting a connection to a data source. In addition to their other advantages, which will be explained later, DataSource objects can provide connection pooling and distributed transactions. This functionality is essential for enterprise database computing. In particular, it is integral to Enterprise JavaBeans (EJB) technology.

 
 

This section shows you how to get a connection using the DataSource interface and how to use distributed transactions and connection pooling. Both of these involve very few code changes in your JDBC application.

 
 

The work performed to deploy the classes that make these operations possible, which a system administrator usually does with a tool (such as Apache Tomcat or Oracle WebLogic Server), varies with the type of DataSource object that is being deployed. As a result, most of this section is devoted to showing how a system administrator sets up the environment so that programmers can use a DataSource object to get connections.

 
 

The following topics are covered:

  • Using DataSource Objects to Get a Connection
  • Deploying Basic DataSource Objects
  • Deploying Other DataSource Implementations
  • Getting and Using Pooled Connections

 
 

Using DataSource Objects to Get a Connection

In Establishing a Connection, you learned how to get a connection using the DriverManager class. This section shows you how to use a DataSource object to get a connection to your data source, which is the preferred way.

 
 

A DataSource object, instantiated by a class that implements the DataSource interface, represents a particular DBMS or some other data source, such as a file. If a company uses more than one data source, it will deploy a separate DataSource object for each of them. The DataSource interface is implemented by a driver vendor. It can be implemented in three different ways:

 
 

  • A basic DataSource implementation produces standard Connection objects that are not pooled or used in a distributed transaction.
  • A DataSource implementation that supports connection pooling produces Connection objects that participate in connection pooling, that is, connections that can be recycled.
  • A DataSource implementation that supports distributed transactions produces Connection objects that can be used in a distributed transaction, that is, a transaction that accesses two or more DBMS servers.

 
 

A JDBC driver should include at least a basic DataSource implementation. For example, the Java DB JDBC driver includes the implementation org.apache.derby.jdbc.ClientDataSource and, for MySQL, com.mysql.jdbc.jdbc2.optional.MysqlDataSource.

 
 

A DataSource class that supports distributed transactions typically also implements support for connection pooling. For example, a DataSource class provided by an EJB vendor almost always supports both connection pooling and distributed transactions.

 
 

Suppose that the owner of the thriving chain of The Coffee Break shops, from the previous examples, has decided to expand further by selling coffee over the Internet. With the large amount of online business expected, the owner will definitely need connection pooling. Opening and closing connections involves a great deal of overhead, and the owner anticipates that this online ordering system will necessitate a sizable number of queries and updates. With connection pooling, a pool of connections can be used over and over again, avoiding the expense of creating a new connection for every database access. In addition, the owner now has a second DBMS that contains data for the recently acquired coffee roasting company. This means that the owner will want to be able to write distributed transactions that use both the old DBMS server and the new one.

 
 

The chain owner has reconfigured the computer system to serve the new, larger customer base. The owner has purchased the most recent JDBC driver and an EJB application server that works with it to be able to use distributed transactions and get the increased performance that comes with connection pooling. Many JDBC drivers are available that are compatible with the recently purchased EJB server. The owner now has a three-tier architecture, with a new EJB application server and JDBC driver in the middle tier and the two DBMS servers as the third tier. Client computers making requests are the first tier.

 
 

Deploying Basic DataSource Objects

The system administrator needs to deploy DataSource objects so that The Coffee Break’s programming team can start using them. Deploying a DataSource object consists of three tasks:

 
 

  1. Creating an instance of the DataSource class
  2. Setting its properties
  3. Registering it with a naming service that uses the Java Naming and Directory Interface (JNDI) API

 
 

First, consider the most basic case, which is to use a basic implementation of the DataSource interface, that is, one that does not support connection pooling or distributed transactions. In this case there is only one DataSource object that needs to be deployed. A basic implementation of DataSource produces the same kind of connections that the DriverManager class produces.

 
 

Creating Instance of DataSource Class and Setting its Properties

Suppose a company that wants only a basic implementation of DataSource has bought a driver from the JDBC vendor DB Access, Inc. This driver includes the class com.dbaccess.BasicDataSource that implements the DataSource interface. The following code excerpt creates an instance of the class BasicDataSource and sets its properties. After the instance of BasicDataSource is deployed, a programmer can call the method DataSource.getConnection to get a connection to the company's database, CUSTOMER_ACCOUNTS. First, the system administrator creates the BasicDataSource object ds using the default constructor. The system administrator then sets three properties. Note that the following code would typically be executed by a deployment tool:


com.dbaccess.BasicDataSource ds = new com.dbaccess.BasicDataSource();
ds.setServerName("grinder");
ds.setDatabaseName("CUSTOMER_ACCOUNTS");
ds.setDescription("Customer accounts database for billing");

 
 

The variable ds now represents the database CUSTOMER_ACCOUNTS installed on the server. Any connection produced by the BasicDataSource object ds will be a connection to the database CUSTOMER_ACCOUNTS.

 
 

Registering DataSource Object with Naming Service That Uses JNDI API

With the properties set, the system administrator can register the BasicDataSource object with a JNDI (Java Naming and Directory Interface) naming service. The particular naming service that is used is usually determined by a system property, which is not shown here. The following code excerpt registers the BasicDataSource object and binds it with the logical name jdbc/billingDB:


Context ctx = new InitialContext();
ctx.bind("jdbc/billingDB", ds);

This code uses the JNDI API. The first line creates an InitialContext object, which serves as the starting point for a name, similar to root directory in a file system. The second line associates, or binds, the BasicDataSource object ds to the logical name jdbc/billingDB. In the next code excerpt, you give the naming service this logical name, and it returns the BasicDataSource object. The logical name can be any string. In this case, the company decided to use the name billingDB as the logical name for the CUSTOMER_ACCOUNTS database.

In the previous example, jdbc is a subcontext under the initial context, just as a directory under the root directory is a subdirectory. The name jdbc/billingDB is like a path name, where the last item in the path is analogous to a file name. In this case, billingDB is the logical name that is given to the BasicDataSource object ds. The subcontext jdbc is reserved for logical names to be bound to DataSource objects, so jdbc will always be the first part of a logical name for a data source.

 
 

Using Deployed DataSource Object

After a basic DataSource implementation is deployed by a system administrator, it is ready for a programmer to use. This means that a programmer can give the logical data source name that was bound to an instance of a DataSource class, and the JNDI naming service will return an instance of that DataSource class. The method getConnection can then be called on that DataSource object to get a connection to the data source it represents. For example, a programmer might write the following two lines of code to get a DataSource object that produces a connection to the database CUSTOMER_ACCOUNTS.


Context ctx = new InitialContext();
DataSource ds = (DataSource) ctx.lookup("jdbc/billingDB");

 
 

The first line of code gets an initial context as the starting point for retrieving a DataSource object. When you supply the logical name jdbc/billingDB to the method lookup, the method returns the DataSource object that the system administrator bound to jdbc/billingDB at deployment time. Because the return value of the method lookup is a Java Object, we must cast it to the more specific DataSource type before assigning it to the variable ds.

The variable ds is an instance of the class com.dbaccess.BasicDataSource that implements the DataSource interface. Calling the method ds.getConnection produces a connection to the CUSTOMER_ACCOUNTS database.


Connection con = ds.getConnection("fernanda", "brewed");

The getConnection method requires only the user name and password because the variable ds has the rest of the information necessary for establishing a connection with the CUSTOMER_ACCOUNTS database, such as the database name and location, in its properties.

Advantages of DataSource Objects

Because of its properties, a DataSource object is a better alternative than the DriverManager class for getting a connection. Programmers no longer have to hard code the driver name or JDBC URL in their applications, which makes them more portable. Also, DataSource properties make maintaining code much simpler. If there is a change, the system administrator can update data source properties and not be concerned about changing every application that makes a connection to the data source. For example, if the data source were moved to a different server, all the system administrator would have to do is set the serverName property to the new server name.

Aside from portability and ease of maintenance, using a DataSource object to get connections can offer other advantages. When the DataSource interface is implemented to work with a ConnectionPoolDataSource implementation, all of the connections produced by instances of that DataSource class will automatically be pooled connections. Similarly, when the DataSource class is implemented to work with an XADataSource class, all of the connections it produces will automatically be connections that can be used in a distributed transaction. The next section shows how to deploy these types of DataSource implementations.

Deploying Other DataSource Implementations

A system administrator or another person working in that capacity can deploy a DataSource object so that the connections it produces are pooled connections. To do this, he or she first deploys a ConnectionPoolDataSource object and then deploys a DataSource object implemented to work with it. The properties of the ConnectionPoolDataSource object are set so that it represents the data source to which connections will be produced. After the ConnectionPoolDataSource object has been registered with a JNDI naming service, the DataSource object is deployed. Generally only two properties must be set for the DataSource object: description and dataSourceName. The value given to the dataSourceName property is the logical name identifying the ConnectionPoolDataSource object previously deployed, which is the object containing the properties needed to make the connection.

With the ConnectionPoolDataSource and DataSource objects deployed, you can call the method DataSource.getConnection on the DataSource object and get a pooled connection. This connection will be to the data source specified in the ConnectionPoolDataSource object’s properties.

The following example describes how a system administrator for The Coffee Break would deploy a DataSource object implemented to provide pooled connections. The system administrator would typically use a deployment tool, so the code fragments shown in this section are the code that a deployment tool would execute.

To get better performance, The Coffee Break company has bought a JDBC driver from DB Access, Inc. that includes the class com.dbaccess.ConnectionPoolDS, which implements the ConnectionPoolDataSource interface. The system administrator creates an instance of this class, sets its properties, and registers it with a JNDI naming service. The Coffee Break has bought its DataSource class, com.applogic.PooledDataSource, from its EJB server vendor, Application Logic, Inc. The class com.applogic.PooledDataSource implements connection pooling by using the underlying support provided by the ConnectionPoolDataSource class com.dbaccess.ConnectionPoolDS.

The ConnectionPoolDataSource object must be deployed first. The following code creates an instance of com.dbaccess.ConnectionPoolDS and sets its properties:


com.dbaccess.ConnectionPoolDS cpds = new com.dbaccess.ConnectionPoolDS();
cpds.setServerName("creamer");
cpds.setDatabaseName("COFFEEBREAK");
cpds.setPortNumber(9040);
cpds.setDescription("Connection pooling for COFFEEBREAK DBMS");

After the ConnectionPoolDataSource object has been deployed, the system administrator deploys the DataSource object. The following code registers the com.dbaccess.ConnectionPoolDS object cpds with a JNDI naming service. Note that the logical name being associated with the cpds variable has the subcontext pool added under the subcontext jdbc, which is similar to adding a subdirectory to another subdirectory in a hierarchical file system. The logical name of any instance of the class com.dbaccess.ConnectionPoolDS will always begin with jdbc/pool. Oracle recommends putting all ConnectionPoolDataSource objects under the subcontext jdbc/pool:


Context ctx = new InitialContext();
ctx.bind("jdbc/pool/fastCoffeeDB", cpds);

Next, the DataSource class that is implemented to interact with the cpds variable and other instances of the com.dbaccess.ConnectionPoolDS class is deployed. The following code creates an instance of this class and sets its properties. Note that only two properties are set for this instance of com.applogic.PooledDataSource. The description property is set because it is always required. The other property that is set, dataSourceName, gives the logical JNDI name for cpds, which is an instance of the com.dbaccess.ConnectionPoolDS class. In other words, cpds represents the ConnectionPoolDataSource object that will implement connection pooling for the DataSource object.

The following code, which would probably be executed by a deployment tool, creates a PooledDataSource object, sets its properties, and binds it to the logical name jdbc/fastCoffeeDB:


com.applogic.PooledDataSource ds =
    new com.applogic.PooledDataSource();
ds.setDescription(
    "produces pooled connections to COFFEEBREAK");
ds.setDataSourceName("jdbc/pool/fastCoffeeDB");
Context ctx = new InitialContext();
ctx.bind("jdbc/fastCoffeeDB", ds);

At this point, a DataSource object is deployed from which an application can get pooled connections to the database COFFEEBREAK.

Getting and Using Pooled Connections

A connection pool is a cache of database connection objects. The objects represent physical database connections that an application can use to connect to a database. At run time, the application requests a connection from the pool. If the pool contains a connection that can satisfy the request, it returns that connection to the application. If no connections are available, a new connection is created and returned to the application. The application uses the connection to perform some work on the database and then returns the object to the pool. The connection is then available for the next connection request.

Connection pools promote the reuse of connection objects and reduce the number of times that connection objects are created. Connection pools significantly improve performance for database-intensive applications because creating connection objects is costly both in terms of time and resources.

Now that these DataSource and ConnectionPoolDataSource objects are deployed, a programmer can use the DataSource object to get a pooled connection. The code for getting a pooled connection is just like the code for getting a nonpooled connection, as shown in the following two lines:


ctx = new InitialContext();
ds = (DataSource)ctx.lookup("jdbc/fastCoffeeDB");

The variable ds represents a DataSource object that produces pooled connections to the database COFFEEBREAK. You need to retrieve this DataSource object only once because you can use it to produce as many pooled connections as needed. Calling the method getConnection on the ds variable automatically produces a pooled connection because the DataSource object that the ds variable represents was configured to produce pooled connections.

Connection pooling is generally transparent to the programmer. There are only two things you need to do when you are using pooled connections:

  1. Use a DataSource object rather than the DriverManager class to get a connection. In the following line of code, ds is a DataSource object implemented and deployed so that it will create pooled connections, and username and password are variables that represent the credentials of a user that has access to the database:

    Connection con = ds.getConnection(username, password);

  2. Use a finally statement to close a pooled connection. The following finally block would appear after the try/catch block that applies to the code in which the pooled connection was used (a try-with-resources alternative is sketched after this list):

    Connection con = null;
    try {
        con = ds.getConnection(username, password);
        // ... code to use the pooled connection con
    } catch (Exception ex) {
        // ... code to handle exceptions
    } finally {
        if (con != null) con.close();
    }
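
On Java 7 and later, Connection is AutoCloseable, so a try-with-resources statement can replace the explicit finally block; the connection is still returned to the pool when the block exits. A minimal sketch, assuming the same ds, username, and password variables as above:

    try (Connection con = ds.getConnection(username, password)) {
        // ... code to use the pooled connection con; close() is called
        // automatically on exit, returning the connection to the pool
    } catch (SQLException ex) {
        // ... code to handle exceptions
    }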

Otherwise, an application using a pooled connection is identical to an application using a regular connection. The only other thing an application programmer might notice when connection pooling is being done is that performance is better.

The following sample code gets a DataSource object that produces connections to the database COFFEEBREAK and uses it to update a price in the table COFFEES:


import java.sql.*;
import javax.sql.*;
import javax.ejb.*;
import javax.naming.*;

public class ConnectionPoolingBean implements SessionBean {

    // ...

    public void ejbCreate() throws CreateException {
        try {
            // look up the pooled DataSource once and cache it
            ctx = new InitialContext();
            ds = (DataSource) ctx.lookup("jdbc/fastCoffeeDB");
        } catch (NamingException ex) {
            // NamingException is checked, so it must be handled here
            throw new CreateException(ex.getMessage());
        }
    }

    public void updatePrice(float price, String cofName,
                            String username, String password)
            throws SQLException {

        Connection con = null;
        PreparedStatement pstmt = null;
        try {
            con = ds.getConnection(username, password);
            con.setAutoCommit(false);
            pstmt = con.prepareStatement("UPDATE COFFEES " +
                "SET PRICE = ? WHERE COF_NAME = ?");
            pstmt.setFloat(1, price);
            pstmt.setString(2, cofName);
            pstmt.executeUpdate();

            con.commit();

            pstmt.close();

        } finally {
            if (con != null) con.close();   // returns the connection to the pool
        }
    }

    private DataSource ds = null;
    private Context ctx = null;
}

The connection in this code sample participates in connection pooling because the following are true:

  • An instance of a class implementing ConnectionPoolDataSource has been deployed.
  • An instance of a class implementing DataSource has been deployed, and the value set for its dataSourceName property is the logical name that was bound to the previously deployed ConnectionPoolDataSource object.

Note that although this code is very similar to code you have seen before, it is different in the following ways:

  • It imports the javax.sql, javax.ejb, and javax.naming packages in addition to java.sql.
    The DataSource and ConnectionPoolDataSource interfaces are in the javax.sql package, and the JNDI constructor InitialContext and method Context.lookup are part of the javax.naming package. This particular example code is in the form of an EJB component that uses API from the javax.ejb package. The purpose of this example is to show that you use a pooled connection the same way you use a nonpooled connection, so you need not worry about understanding the EJB API.
  • It uses a DataSource object to get a connection instead of using the DriverManager facility.
  • It uses a finally block to ensure that the connection is closed.

Getting and using a pooled connection is similar to getting and using a regular connection. When someone acting as a system administrator has deployed a ConnectionPoolDataSource object and a DataSource object properly, an application uses that DataSource object to get a pooled connection. An application should, however, use a finally block to close the pooled connection. For simplicity, the preceding example used a finally block but no catch block. If an exception is thrown by a method in the try block, it will propagate to the caller by default, and the finally clause will still be executed.

Deploying Distributed Transactions

DataSource objects can be deployed to get connections that can be used in distributed transactions. As with connection pooling, two different class instances must be deployed: an XADataSource object and a DataSource object that is implemented to work with it.

Suppose that the EJB server that The Coffee Break entrepreneur bought includes the DataSource class com.applogic.TransactionalDS, which works with an XADataSource class such as com.dbaccess.XATransactionalDS. The fact that it works with any XADataSource class makes the EJB server portable across JDBC drivers. When the DataSource and XADataSource objects are deployed, the connections produced will be able to participate in distributed transactions. In this case, the class com.applogic.TransactionalDS is implemented so that the connections produced are also pooled connections, which will usually be the case for DataSource classes provided as part of an EJB server implementation.

The XADataSource object must be deployed first. The following code creates an instance of com.dbaccess.XATransactionalDS and sets its properties:


com.dbaccess.XATransactionalDS xads =
    new com.dbaccess.XATransactionalDS();
xads.setServerName("creamer");
xads.setDatabaseName("COFFEEBREAK");
xads.setPortNumber(9040);
xads.setDescription(
    "Distributed transactions for COFFEEBREAK DBMS");

The following code registers the com.dbaccess.XATransactionalDS object xads with a JNDI naming service. Note that the logical name being associated with xads has the subcontext xa added under jdbc. Oracle recommends that the logical name of any instance of the class com.dbaccess.XATransactionalDS always begin with jdbc/xa.


Context ctx = new InitialContext();
ctx.bind("jdbc/xa/distCoffeeDB", xads);

Next, the DataSource object that is implemented to interact with xads and other XADataSource objects is deployed. Note that the DataSource class, com.applogic.TransactionalDS, can work with an XADataSource class from any JDBC driver vendor. Deploying the DataSource object involves creating an instance of the com.applogic.TransactionalDS class and setting its properties. The dataSourceName property is set to jdbc/xa/distCoffeeDB, the logical name associated with com.dbaccess.XATransactionalDS. This is the XADataSource class that implements the distributed transaction capability for the DataSource class. The following code deploys an instance of the DataSource class:


com.applogic.TransactionalDS ds =
    new com.applogic.TransactionalDS();
ds.setDescription(
    "Produces distributed transaction " +
    "connections to COFFEEBREAK");
ds.setDataSourceName("jdbc/xa/distCoffeeDB");
Context ctx = new InitialContext();
ctx.bind("jdbc/distCoffeeDB", ds);

Now that instances of the classes com.applogic.TransactionalDS and com.dbaccess.XATransactionalDS have been deployed, an application can call the method getConnection on instances of the TransactionalDS class to get a connection to the COFFEEBREAK database that can be used in distributed transactions.

Using Connections for Distributed Transactions

To get a connection that can be used for distributed transactions, you must use a DataSource object that has been properly implemented and deployed, as shown in the section Deploying Distributed Transactions. With such a DataSource object, call the method getConnection on it. After you have the connection, use it just as you would use any other connection. Because jdbc/distCoffeeDB has been associated with an XADataSource object in a JNDI naming service, the following code produces a Connection object that can be used in distributed transactions:


Context ctx = new InitialContext();
DataSource ds =
    (DataSource)ctx.lookup("jdbc/distCoffeeDB");
Connection con = ds.getConnection();

There are some minor but important restrictions on how this connection is used while it is part of a distributed transaction. A transaction manager controls when a distributed transaction begins and when it is committed or rolled back; therefore, application code should never call the methods Connection.commit or Connection.rollback. An application should likewise never call Connection.setAutoCommit(true), which enables the auto-commit mode, because that would also interfere with the transaction manager’s control of the transaction boundaries. This explains why a new connection that is created in the scope of a distributed transaction has its auto-commit mode disabled by default. Note that these restrictions apply only when a connection is participating in a distributed transaction; there are no restrictions while the connection is not part of a distributed transaction.
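
To make the division of responsibility concrete, the following fragment sketches what the transaction manager does around the application's SQL work. It assumes a JTA javax.transaction.UserTransaction obtained from the container under the conventional JNDI name java:comp/UserTransaction and an enclosing method declared to throw Exception; in an EJB component the container normally begins and commits the transaction for you, so this is only an illustration of what happens under the covers:

// Minimal sketch, assuming a JTA UserTransaction is available under the
// conventional JNDI name "java:comp/UserTransaction" (container-provided).
UserTransaction ut =
    (UserTransaction) ctx.lookup("java:comp/UserTransaction");
ut.begin();                               // the transaction manager starts the transaction
try {
    Connection con = ds.getConnection();  // enlisted automatically; auto-commit is off
    try {
        // ... SQL work; no con.commit(), con.rollback(),
        // or con.setAutoCommit(true) here ...
    } finally {
        con.close();                      // returns the connection to the pool
    }
    ut.commit();                          // the manager commits all enlisted resources
} catch (Exception e) {
    ut.rollback();                        // the manager rolls everything back
    throw e;
}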

For the following example, suppose that an order of coffee has been shipped, which triggers updates to two tables that reside on different DBMS servers. The first table is a new INVENTORY table, and the second is the COFFEES table. Because these tables are on different DBMS servers, a transaction that involves both of them will be a distributed transaction. The code in the following example, which obtains a connection, updates the COFFEES table, and closes the connection, is the second part of a distributed transaction.

Note that the code does not explicitly commit or roll back the updates because the scope of the distributed transaction is being controlled by the middle tier server’s underlying system infrastructure. Also, assuming that the connection used for the distributed transaction is a pooled connection, the application uses a finally block to close the connection. This guarantees that a valid connection will be closed even if an exception is thrown, thereby ensuring that the connection is returned to the connection pool to be recycled.

The following code sample illustrates an enterprise Bean, which is a class that implements the methods that can be called by a client computer. The purpose of this example is to demonstrate that application code for a distributed transaction is no different from other code except that it does not call the Connection methods commit, rollback, or setAutoCommit(true). Therefore, you do not need to worry about understanding the EJB API that is used.


import java.sql.*;
import javax.sql.*;
import javax.ejb.*;
import javax.naming.*;

public class DistributedTransactionBean implements SessionBean {

    // ...

    public void ejbCreate() throws CreateException {
        try {
            // look up the distributed-transaction DataSource once and cache it
            ctx = new InitialContext();
            ds = (DataSource) ctx.lookup("jdbc/distCoffeeDB");
        } catch (NamingException ex) {
            // NamingException is checked, so it must be handled here
            throw new CreateException(ex.getMessage());
        }
    }

    public void updateTotal(int incr, String cofName,
                            String username, String password)
            throws SQLException {

        Connection con = null;
        PreparedStatement pstmt = null;
        try {
            con = ds.getConnection(username, password);
            pstmt = con.prepareStatement("UPDATE COFFEES " +
                "SET TOTAL = TOTAL + ? WHERE COF_NAME = ?");
            pstmt.setInt(1, incr);
            pstmt.setString(2, cofName);
            pstmt.executeUpdate();
            pstmt.close();

        } finally {
            if (con != null) con.close();   // returns the connection to the pool
        }
    }

    private DataSource ds = null;
    private Context ctx = null;
}
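
For completeness, the first half of this distributed transaction, the update to the INVENTORY table on the other DBMS server, would follow the same pattern. The method below is a hypothetical sketch: the JNDI name jdbc/distInventoryDB, the invDs field it implies, and the QUANTITY column are assumptions for illustration, not part of the deployment described above, but the key point carries over: no commit, rollback, or setAutoCommit(true).

// Hypothetical sketch of the transaction's first half. The DataSource behind
// invDs (e.g. bound at "jdbc/distInventoryDB") and the INVENTORY column
// QUANTITY are assumptions for illustration only.
public void updateInventory(int incr, String cofName, String username, String password)
        throws SQLException {

    Connection con = null;
    PreparedStatement pstmt = null;
    try {
        con = invDs.getConnection(username, password);
        pstmt = con.prepareStatement("UPDATE INVENTORY " +
            "SET QUANTITY = QUANTITY - ? WHERE COF_NAME = ?");
        pstmt.setInt(1, incr);
        pstmt.setString(2, cofName);
        pstmt.executeUpdate();
        pstmt.close();
    } finally {
        if (con != null) con.close();
    }
}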