Monday, November 17, 2014

Autohotkey script to generate a BIRT report from Eclipse Report Design perspective

An Autohotkey script to generate a BIRT report from Eclipse when using the Report Design perspective (Run → View Report → In Web Viewer/As Doc/As PDF etc). I use this because, frustratingly, there are no key bindings to generate a report and no accelerator keys under the View Report menu in Eclipse since Luna Release (4.4.0).

; Match window titles as regular expressions - needed for the .*rptdesign pattern.
SetTitleMatchMode, RegEx

; In Eclipse, run BIRT report.
; (F12 as the trigger key is illustrative - the original hotkey binding was not preserved.)
#IfWinActive, Report Design - .*rptdesign - Eclipse
F12::
   SendInput, {ALT}rr{down}{down}{down}{down}{down}{down}{down}{enter}
   Sleep, 800
   clickOkToGenerateBirtReport()
return

; Eclipse only - but a torn-out window panel of it.
#IfWinActive, ahk_class #32770
F12::
   ; View BIRT report
   SendInput, {F12}
   Sleep, 100
   SendInput, {ALT}rr{down}{down}{down}{down}{down}{down}{down}{enter}
   Sleep, 1000
   clickOkToGenerateBirtReport()
return
#IfWinActive

clickOkToGenerateBirtReport() {
   ; If we can see the Parameter Selection Page, click OK.
   ; Look for it up to 100 times, sleeping in between.
   Loop, 100
   {
      if WinActive("PARAMETER SELECTION PAGE")
      {
         WinGet, IEControlList, ControlList, ahk_class SWT_Window0
         Loop, Parse, IEControlList, `n
         {
            if (A_LoopField = "Internet Explorer_Server1")
            {
               MouseClick, left, 630, 580
               return true
            }
         }
      }
      Sleep, 200
   }
   MsgBox Sorry, couldn't find the button to click on.
   return false
}

Note that there are two conditions under which the trigger will execute. The first is when I am in the main window of Eclipse (#IfWinActive, Report Design - .*rptdesign - Eclipse). The second is for when I have Eclipse spread across two monitors, have torn off panels to sit on the other monitor, and run the command from there (#IfWinActive, ahk_class #32770).

Tuesday, October 21, 2014

JUnit Parameterized and named tests

A relatively simple, working and complete example of JUnit Parameterized named tests.

Advantages of Parameterized in JUnit testing:

  • Re-use the same test methods over and over, just changing the parameters.
  • The method annotated by @Parameters could return a collection of data from a spreadsheet, database or hard-coded variable.
  • New naming mechanism lets you add a sensible label to each test to make it easy to identify the one that failed.

import static org.junit.Assert.assertEquals;

import java.util.Arrays;
import java.util.Collection;

import org.junit.Test;
import org.junit.runner.RunWith;
import org.junit.runners.Parameterized;
import org.junit.runners.Parameterized.Parameters;

@RunWith(Parameterized.class)
public final class MultiplicationParameterizedTest {

   private final int expectedResult;
   private final int firstNumber;
   private final int secondNumber;

   public MultiplicationParameterizedTest(final int theExpectedResult,
         final int theFirstNumber, final int theSecondNumber) {
      expectedResult = theExpectedResult;
      firstNumber = theFirstNumber;
      secondNumber = theSecondNumber;
   }

   @Parameters(name = "Multiplication test {index}: {0}={1}x{2}")
   public static Collection<Integer[]> numbersToBeMultiplied() {
      return Arrays.asList(new Integer[][] {
            { 10, 5, 2 },
            { 48, 6, 8 },
            { 1452, 33, 44 },
            { 1044, 87, 12 },
            { 135, 3, 45 },
      });
   }

   @Test
   public void sum() {
      final int actualResult = multiplyNumbers(firstNumber, secondNumber);
      assertEquals("Expected [" + firstNumber + " * " + secondNumber + " = "
            + expectedResult + "], not [" + actualResult + "]", expectedResult,
            actualResult);
   }

   public int multiplyNumbers(final int a, final int b) {
      return a * b;
   }
}

Here is what the tests look like when they pass in Eclipse. It shows how useful the naming mechanism is.

Now I change the last test case just to show what a failure looks like.

@Parameters(name = "Multiplication test {index}: {0}={1}x{2}")
public static Collection<Integer[]> numbersToBeMultiplied() {
   return Arrays.asList(new Integer[][] {
         { 10, 5, 2 },
         { 48, 6, 8 },
         { 1452, 33, 44 },
         { 1044, 87, 12 },
         { 13, 3, 45 },
   });
}

And in Eclipse, that failure looks like this:

You can still use setup and tear down methods:

@Before
public void setup() {
   System.out.println("Set up [" + firstNumber + " * " + secondNumber
         + " = " + expectedResult + "].");
}

@After
public void tearDown() {
   System.out.println("Tear down [" + firstNumber + " * " + secondNumber
         + " = " + expectedResult + "].");
}

Note that the names feature (the name attribute of @Parameters) was introduced in JUnit 4.11, which is the latest stable build at the time of writing (Tuesday 21 October 2014).

What about data from a spreadsheet with lots of columns?

I am thinking of using this for tests that will have a lot of data that I want to write in a spreadsheet of ten columns or more. One problem I have with the above example is that I have a constructor accepting one parameter for each column of data. I need to encapsulate that in a class. This is because I am thinking of using this to drive Selenium tests, where I need lots of data to enter into every field in the UI. The example below shows part of how I will handle this.

import static org.junit.Assert.assertEquals;

import java.util.Arrays;
import java.util.Collection;

import org.junit.Test;
import org.junit.runner.RunWith;
import org.junit.runners.Parameterized;
import org.junit.runners.Parameterized.Parameters;

@RunWith(Parameterized.class)
public final class MultiplicationParameterizedTestWithObject {

   private final DataUnderTest data;

   public MultiplicationParameterizedTestWithObject(final DataUnderTest theData) {
      data = theData;
   }

   @Parameters(name = "Multiplication test {index}: {0}")
   public static Collection<DataUnderTest[]> numbersToBeMultiplied() {
      return Arrays.asList(new DataUnderTest[][] {
            { new DataUnderTest("10 = 5 * 2", 10, 5, 2) },
            { new DataUnderTest("48 = 6 * 8", 48, 6, 8) },
            { new DataUnderTest("1,452 = 33 * 44", 1452, 33, 44) },
            { new DataUnderTest("1,044 = 87 * 12", 1044, 87, 12) },
            { new DataUnderTest("135 = 3 * 45", 135, 3, 45) },
      });
   }

   @Test
   public void sum() {
      final int firstNumber = data.getFirstNumber();
      final int secondNumber = data.getSecondNumber();
      final int actualResult = multiplyNumbers(firstNumber, secondNumber);
      final int expectedResult = data.getExpectedResult();
      assertEquals("Expected [" + firstNumber + " * " + secondNumber + " = "
            + expectedResult + "], not [" + actualResult + "]", expectedResult,
            actualResult);
   }

   public int multiplyNumbers(final int a, final int b) {
      return a * b;
   }

   private static class DataUnderTest {
      private final String label;
      private final int expectedResult;
      private final int firstNumber;
      private final int secondNumber;

      public DataUnderTest(final String theLabel, final int theExpectedResult,
            final int theFirstNumber, final int theSecondNumber) {
         label = theLabel;
         expectedResult = theExpectedResult;
         firstNumber = theFirstNumber;
         secondNumber = theSecondNumber;
      }

      @Override
      public String toString() {
         return label;
      }

      public String getLabel() {
         return label;
      }

      public int getExpectedResult() {
         return expectedResult;
      }

      public int getFirstNumber() {
         return firstNumber;
      }

      public int getSecondNumber() {
         return secondNumber;
      }
   }
}

What I will need to do differently from above in my real test cases:

  • DataUnderTest will be much bigger - big enough that it will need to be in a separate Java file.
  • The method annotated with @Parameters will read data from an Excel spreadsheet, perhaps using Java Excel API or The Apache POI Project.
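Since the method annotated with @Parameters can build its collection however it likes, the spreadsheet reading can be prototyped in plain Java before wiring in a real Excel library. Below is a hedged sketch - the class and method names are my own, and it parses CSV-style lines rather than a real .xls file - of turning rows of text into the Integer rows that Parameterized expects.

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;

/** Hypothetical sketch: turn CSV-style rows into the Integer[][] shape @Parameters expects. */
public class CsvParameters {

    /** Parses lines like "10,5,2" into Integer rows (expected, first, second). */
    public static List<Integer[]> parse(final List<String> csvLines) {
        final List<Integer[]> rows = new ArrayList<>();
        for (final String line : csvLines) {
            final String[] cells = line.split(",");
            final Integer[] row = new Integer[cells.length];
            for (int i = 0; i < cells.length; i++) {
                row[i] = Integer.valueOf(cells[i].trim());
            }
            rows.add(row);
        }
        return rows;
    }

    public static void main(String[] args) {
        // In a real test this data would come from the spreadsheet file instead.
        final List<Integer[]> data = parse(Arrays.asList("10,5,2", "48,6,8"));
        for (final Integer[] row : data) {
            System.out.println(Arrays.toString(row));
        }
    }
}
```

In the real test, the inner loop would read cells with the Apache POI (or Java Excel) API instead of String.split, but the shape of the returned collection stays the same.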

What I like about this approach:

  • JUnit Parameterized seems to scale nicely because the method annotated with @Parameters can get data from anywhere.
  • By using DataUnderTest.toString(), I can still make good use of the same naming mechanism: @Parameters(name = "Multiplication test {index}: {0}"). By using this name declaration, the Eclipse results will look exactly the same as the first example because the {0} will be replaced by DataUnderTest.toString(). In my actual test, I plan to use the first column of data as the label.

What I don't like about this approach:

  • The method annotated with @Parameters has to return an Iterable of arrays, for example Collection<DataUnderTest[]>. That forces me to bind each element of the array to a constructor parameter, which is at least well defined but not very flexible. For example, I cannot create an array that mixes String and int without making it an Object array, in which case I lose type information and am forced to cast or somehow convert the values back to the types I want.

Thursday, October 16, 2014

Script to pump output into a text file and open it in an editor

I use this little script a lot when I am running some command in bash (on Cygwin in Windows) and I want to capture the output into a text file and view it in my favourite editor (like UltraEdit).

For example, the tree command (or a recursive ls) will have a lot of output if you run it on a directory with a few children and a few levels of nested folders. Because the output is usually too large to view nicely in a single screen of text on a console, I will want to view it in a text editor where I can manipulate it: search through it, cut/copy/paste contents and generally modify it.

Before I wrote the script this post is about, I would do this in the following way:

tree > /tmp/temp.txt; u /tmp/temp.txt

Easy enough, but tiresome to type over and over.

In the above snippet, the u command refers to one of my most used custom scripts to open a text file in my favourite editor (with a few bells and whistles). It could be replaced with the following in this case:

tree > /tmp/temp.txt; Uedit32.exe `cygpath -w -a "/tmp/temp.txt"` &

Or more simply, a *nix tool like vim or less (though less won't let you edit it).

tree > /tmp/temp.txt; vim /tmp/temp.txt
tree > /tmp/temp.txt; less /tmp/temp.txt

Now, I will do this:

tree | intoTempFile

The intoTempFile script will write the output into a timestamped temp file (C:\cygwin\tmp\temp_20141016_220342.txt) and open it in UltraEdit for me. Here is the script.


#!/bin/bash
# ------------------------------------------------------------------------------
# -- Into Temp File.
# ------------------------------------------------------------------------------
# Redirect piped input into a temp file and open it in UltraEdit.
#     Usage: someCommand | intoTempFile [altFilename.txt] [-t]

# ------------------------------------------------------------------------------
# -- Variables for this script.
# ------------------------------------------------------------------------------
# Shortcut for the name of this file - for docs.
commandName=`echo $0 | sed 's|.*/||'`

# Output file.
tempFile=/tmp/temp_$(date +"%Y%m%d_%H%M%S").txt

# To tee or not to tee? Default is no; the -t option switches it on.
shouldUseTee=no

# Single-letter options accepted by getopts.
OPTIONS="t"

# ------------------------------------------------------------------------------
# -- Common functions for this script.
# ------------------------------------------------------------------------------

# ===  FUNCTION  ===============================================================
#   DESCRIPTION:  Usage message.
#       RETURNS:  -
# ==============================================================================
function usage() {
   echo "Usage: someCommand | $commandName [altFilename.txt] [-t]"
   echo "-t means use tee. Without it, nothing is output to console."
}

# ===  FUNCTION  ===============================================================
#   DESCRIPTION:  Process all arguments to script. How to handle getopts args
#                 and operands at the same time:
#       RETURNS:  -
# ==============================================================================
function processArguments() {
   # Arg checks - no more than two args.
   if [ $# -gt 2 ] ; then
      echo "** Incorrect number of args specified **"
      usage
      exit 22
   fi

   # Loop to handle all parameters.
   while true; do
      # Process single letter args.
      while getopts "$OPTIONS" option; do
         if ! processSingleLetterArguments "$option"; then exit 9; fi
      done
      if ((OPTIND > $#)); then break; fi
      # Handle operand arg - an alternative output file name.
      tempFile="${!OPTIND}"
      ((OPTIND++))
   done

   # ---------------------------------------------------------------------------
   # Post argument processing - logic that must be applied once we know all args.
   # ---------------------------------------------------------------------------
}

# ===  FUNCTION  ===============================================================
#   DESCRIPTION:  Process single letter args via getopts
#       RETURNS:  -
# ==============================================================================
function processSingleLetterArguments() {
   case "$1" in
      t ) shouldUseTee=yes;;
      \?) usage "*** Invalid option to $commandName: -$OPTARG ***"; exit 10;;
      : ) usage "*** $commandName - option -$OPTARG requires an argument. ***"; exit 11;;
   esac
}

# ===  FUNCTION  ===============================================================
#   DESCRIPTION:  Output a single line
#                 Will query tee variable.
#    PARAMETERS:  1 - line to output
#       RETURNS:  -
# ==============================================================================
function outputLine() {
   if [ "$shouldUseTee" == "yes" ] ; then
      echo "$1" 2>&1 | tee -a "${tempFile}"
   else
      echo "$1" >> "${tempFile}"
   fi
}

# ------------------------------------------------------------------------------
# -- Script logic.
# ------------------------------------------------------------------------------

# Process command line arguments.
processArguments "$@"

# Preserve indenting of source (stop read from stripping leading whitespace).
IFS=''

touch  "${tempFile}"

outputLine "Starting output."
outputLine "----"
outputLine " "

while read -r x ; do
   outputLine "$x"
done

# Open the captured output in the editor (u is my editor-opening script).
u "${tempFile}"

By default this will not show the output to the console. I can change that with the -t option as below - which will use tee to duplicate the output to console as well as a file. I don't usually use this though because it makes the script a lot slower.

tree | intoTempFile -t

Also, if I want to control the path to the text file being used, I can specify it like so.

tree | intoTempFile /tmp/treeList.txt

Saturday, October 04, 2014

Add Eclipse Project to Local and Remote Git Repository

This is a brief tutorial on using Eclipse to check in projects to local and remote Git repositories.

  1. Install and configure EGit within Eclipse, if not already done.
  2. Create a project in Eclipse.
  3. Add project to local git repository.
  4. Create a project in GitHub (with the same name as our local project).
  5. Check in our project to the remote git repository.
  6. The two phases of commitment.
  7. Helpful Resources.

Prerequisites are as below.

  • You have an account already on GitHub. Free accounts can be used to create only public repositories i.e. everybody will be able to see your code. So if you need a private Git repository on GitHub, you have to pay.


Install and Configure EGit

Here you will install the EGit plugin within Eclipse. This gives Eclipse the ability to interact with Git repositories. If you have Eclipse Kepler (4.3), Eclipse Luna (4.4) or later, then this is easily installed through the Eclipse Marketplace.

  1. Open Eclipse, go Help → Eclipse Marketplace.
  2. In the search field, type EGit and press ENTER.
  3. The first result will be EGit - Git Team Provider 3.5.0 (latest version as of this writing).
  4. Click the Install button.
  5. Select Eclipse Git Team provider component.
  6. Select Next, accept license agreements, let it install, let Eclipse restart.

Before you can use EGit, you need to configure it.

Set up your identity. In Eclipse, go Window → Preferences → Team → Git → Configuration. Enter your name and email - Git will use this to identify who is making commits.

Set up location for local repositories. In Eclipse, go Window → Preferences → Team → Git and set a location for Default Repository Folder. I have used D:\Documents\Work\Git.

It is a good idea to create your Git repositories in a different directory than your Eclipse Workspace. The Git repository folders will hold lots of files related to Git setup, not just your project files. If you create your Git repository within your Eclipse Workspace, some operations in Eclipse would suffer a performance hit because they will have to scan all of these other files, not just project files.

You will need to create this directory first and it should be empty.


Create a project

In Eclipse, select File → New → Project → select whatever project type you want (I am using a Java Project). Give it a name and press Finish. The contents of the project aren't important - it is just a few bare files to show the next part of the process - adding it to a local repository and then a remote one.

Note that your project has been created in your Eclipse workspace. In my case, this was D:\Documents\Work\JavaTests\TestProject.


Add project to local git repository

In Package Explorer view, right click on the project you just created and select Team → Share Project. In the Share Project dialog, select Git and then click Next.

The next screen is Configure Git Repository.

I am not selecting Use or create repository in parent folder of project for reasons stated above - the repository folder will contain a lot of Git-specific files apart from my project files and I do not want Eclipse having to scan these folders.

The general idea is that you have one project per repository, so we make one repo for this project. Click Create.

Give the new repository the same name as your project and click Finish.

I am now back on the Configure Git Repository dialog.

I can see the following:

  • Repository: D:\Documents\Work\Git\TestProject\.git - all the files related to Git go here.
  • Working Directory: D:\Documents\Work\Git\TestProject - the parent folder for the Git directory and the (new, target) project directory.
  • Current Location: D:\Documents\Work\JavaTests\TestProject - where the project directory currently is; it will be moved to the Target Location.
  • Target Location: D:\Documents\Work\Git\TestProject\TestProject - new location for the project files.

Note that the Working directory field (read only) gives the repository location (D:\Documents\Work\Git in my case). Also note that the Current Location and Target Location are different. Current Location shows where my files are right now - in my Eclipse workspace. Target Location shows where my files will be moved to - a folder underneath the Git repository Working directory.

This means that when you press Finish, Eclipse will move your files out of the Eclipse workspace and into the Git repository folder. Don't be surprised - this is normal. It takes some getting used to if you haven't used Git before. In the Git repository, you will also notice a new folder - .git, which contains all the files Git uses to manage the repository.

Click Finish.

Now take some time to notice what has changed. The icons in Package Explorer are different.

Note the duplication in names - I have a Git repository called TestProject containing a project called TestProject. The icons reflect Git state. See EGit/User Guide/State which has this image showing at a glance the different states.

Also note that your files have indeed been moved out of your Eclipse workspace and into your Git repository.

Open up the Git Staging view: Window → Show View → Other → select Git → Git Staging.

This view makes it very easy to commit changes.

Click and drag the files down to the Staged Changes box. Type a commit message in the Commit Message text area.

Click Commit.

You can also do this with the commit action. Control+shift+3 is the default keyboard shortcut, or right click on the project and select Team → Commit. However, I find the Git Staging view much easier to use.


Create a project in GitHub (with the same name as our local project)

Go to GitHub and log in.

On the actions menu, select New repository.

Give Repository name the same value as your Eclipse project's name, and whatever Description is appropriate. Note that my repository is Public, so that anyone will be able to see what I check into it. Click Create Repository.

This screen gives us very important information that we will use in the next step - the HTTP URL for interacting with this repository from EGit in Eclipse:


Check in our project to the remote git repository

Back in Eclipse, right click on your project in Package Explorer and select Team → Remote → Push...

Most importantly, in the URI field, enter the URL we got from GitHub earlier. The Host and Repository path fields will auto-fill correctly. In the Authentication section, enter your GitHub username and password. I let Eclipse store these details by selecting Store in Secure Store. If this is the first time I have done it, Eclipse will ask me to set up a Master Password.

Click Next and you will see the Push to: ... dialog.

Click Add All Branches Spec. Under Source ref select master [branch]. Under Destination ref, select refs/heads/master.

Click Next and see the Push Confirmation dialog. Click Finish.

The project files will then be pushed to the GitHub remote repository. When it is complete, you will see a results dialog.

If you refresh your GitHub repository, you should now see the files committed remotely.

And underneath the main project folder are my files.


The two phases of commitment

We now have a system of two-phase commits, so to speak. Work locally and commit changes to the local repository so you have a history and backups as needed.

When you commit, you have a choice of using Commit or Commit and Push. Just choosing Commit will commit your changes to the local repository. Selecting Commit and Push will commit them to your local repository and then push them to the remote repository - e.g. GitHub. The first time you use Commit and Push, it will ask you to set up the remote repository.

Press next and you will see the Push Branch master dialog wherein you tell Git what to do when pulling changes from the upstream repository i.e. bring changes from GitHub back into your local.

Hit Next and the rest of this is like any other commit, but you will have sent your changes to two repositories - local and GitHub.


Helpful Resources


Tuesday, September 23, 2014

The installer is unable to instantiate the file KEY_XE.reg

I got this error when installing Oracle Database Express Edition 11g 32 bit on my Windows 7 64-bit machine.

The installer is unable to instantiate the file C:\Users\<your user name>\AppData\Local\Temp\{60712028-B7B0-4EC3-9C28-663111EC954A}\KEY_XE.reg.  The file does not appear to exist.

I was using Oracle Database Express Edition 11g Release 2 for Windows x32 because at the time there was no 64 bit version for Windows 7.

The answer I found was in a response to an Oracle forum post: XE11: KEY_XE.reg cannot be loaded on WIN7 prof 64b and I have integrated that process into the instructions below.

  1. Unzip the downloaded installer somewhere.
    unzip /C/Users/<your user name>/Downloads/ -d /C/Users/<your user name>/Temp/OracleXe
    chmod -R 777 /C/Users/<your user name>/Temp/OracleXe
  2. Run the exe: C:\Users\<your user name>\Temp\OracleXe\DISK1\setup.exe.
  3. Click NEXT → click I accept... → STOP!
  4. If you continued, you would see the error this post is about. Did you ignore these instructions and continue anyway? That's OK.
    1. Accept the error (click OK).
    2. Let the install finish.
    3. Un-install Oracle Express.
    4. Delete C:\oraclexe
    5. Start again.
  5. You should now be on the screen: Choose Destination location. DO NOT PRESS NEXT YET.
    1. Open Windows Explorer to C:\Users\<your user name>\AppData\Local\Temp and look for a folder that the install just created. It will have a name like this: {60712028-B7B0-4EC3-9C28-663111EC954A} and will be the same as what was reported in the error dialog.
    2. Inside that folder, find the file OracleMTSRecoveryService.reg and make a copy of it. Rename the copy to KEY_XE.reg.
    3. Now go back to the installer and press NEXT.
  6. Specify Database Passwords: someSecurePassword.
  7. Verify that the Current Installation settings are OK:
       Destination Folder: C:\oraclexe\
       Oracle Home: C:\oraclexe\app\oracle\product\11.2.0\server\
       Oracle Base: C:\oraclexe\
       Port for 'Oracle Database Listener': 1521
       Port for 'Oracle Services for Microsoft Transaction Server': 2030
       Port for 'Oracle HTTP Listener': 8080
  8. Install finished.

This worked for me on my previous install on a Windows 7 box.

This week I had to set myself up on a new Windows 7 64-bit install. I went through the steps I outlined above and the install seemed to work but I encountered another problem: Connected to an idle instance; trouble creating DB. The fix for this issue was to un-install the 32-bit Oracle Express and install Oracle Database Express Edition 11g Release 2 for Windows x64 - the 64-bit installer. I did not get the same error (The installer is unable to instantiate the file) when installing the 64-bit version.

Plugin execution not covered by lifecycle configuration

Eclipse is showing this error

Plugin execution not covered by lifecycle configuration: org.apache.maven.plugins:maven-jar-plugin:2.4:jar (execution: make-jar, phase: compile)

It shows the error against the <execution> line in one of our POMs, an extract of which is below.

<plugin>
   <groupId>org.apache.maven.plugins</groupId>
   <artifactId>maven-jar-plugin</artifactId>
   <version>2.4</version>
   <executions>
      <execution>
         <!-- Create shared ViewController JAR file which is used by MYPROJECT-API-V1 -->
         <id>make-jar</id>
         <phase>compile</phase>
         <goals>
            <goal>jar</goal>
         </goals>
      </execution>
   </executions>
</plugin>

I only noticed this error in Eclipse Luna - it was not showing in Eclipse Kepler. Further, I can still build OK from the command line, and even Eclipse still builds the project.

The fix was surprisingly easy. I selected the quick fix Discover new m2e connectors. Nothing came up, but when I clicked Finish, I found that Eclipse was indeed updating the m2e connectors. I let it finish, restarted Eclipse, and the error disappeared.
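The quick fix worked for me, but another commonly documented way to quiet this warning is to tell m2e explicitly to ignore the offending execution, using the org.eclipse.m2e:lifecycle-mapping configuration in pluginManagement. A sketch, with the plugin coordinates taken from the error message above:

```xml
<pluginManagement>
   <plugins>
      <plugin>
         <groupId>org.eclipse.m2e</groupId>
         <artifactId>lifecycle-mapping</artifactId>
         <version>1.0.0</version>
         <configuration>
            <lifecycleMappingMetadata>
               <pluginExecutions>
                  <pluginExecution>
                     <pluginExecutionFilter>
                        <groupId>org.apache.maven.plugins</groupId>
                        <artifactId>maven-jar-plugin</artifactId>
                        <versionRange>[2.4,)</versionRange>
                        <goals>
                           <goal>jar</goal>
                        </goals>
                     </pluginExecutionFilter>
                     <action>
                        <ignore />
                     </action>
                  </pluginExecution>
               </pluginExecutions>
            </lifecycleMappingMetadata>
         </configuration>
      </plugin>
   </plugins>
</pluginManagement>
```

This pseudo-plugin is only read by m2e inside Eclipse; command-line Maven ignores it.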

Monday, September 15, 2014

Java process that can be stopped via sockets

While writing the code for Flip a coin and get the same results 100 times in a row, one thing that troubled me was finding a nice way to stop the process. The easiest (but ugliest) thing I could do was press control+c on the console to force-close (kill) the app. This is terrible because it doesn't allow me to clean up anything. I couldn't create a closing report on results. Worse, as I worked towards the next step - writing results to a database - I had no way to close the database. If you kill an app, it prevents finally clauses or try-with-resources blocks from doing their jobs.

One easy way to stop a long running process is to listen for keyboard input. The code below will do this.

public void run() {
   boolean keepGoing = true;
   try (Scanner keyboard = new Scanner(System.in)) {
      while (keepGoing) {
         System.out.println("Doing work in a loop. Enter STOP to stop.");
         if (keyboard.hasNextLine()) {
            final String next = keyboard.nextLine();
            System.out.println("You entered [" + next + "].");
            if (next.equals("STOP")) {
               keepGoing = false;
            }
         }
      }
   }
   System.out.println("Process stopped.");
}

The problem with this is that my process uses the console to output ongoing results. I write to a file as well, but I find it very useful to write to the console too (standard out) and I don't want to interrupt console reporting. It was really awkward listening for keyboard input from a console that I am busy writing to. It might say "Enter STOP" at the top of the console, but after a few hours, 100s of lines will be written out. So I needed another way.

The approach I decided upon was to use sockets. I based it on the Echo Client/Server from the Java Trail, Reading from and Writing to a Socket. It meant that I had to make my application multi-threaded. Here is a quasi sequence diagram representing the interactions in this pattern.

My main class (RandomInARow - running in a thread) starts up the StopListener thread.

// Spin up a different thread - socket to listen for STOP signal.
stopListener = new StopListener();
stopListener.start();

And the RandomInARow's run() method does the work, and with every iteration, it will first check if the stopListener thread has received the STOP signal. If it has, then the application should clean up and finish.

public void run() {
   // Now do our own work in this thread.
   int target = INCREMENT_BY;
   while (stopListener.isActive()) {
      // ... generate random numbers and report results ...
      target += INCREMENT_BY;
   }
   System.out.println("See results in log file [" + logFile.toAbsolutePath()
         + "].\n");
}

StopListener is a thread that will open up a socket and listen to it. As soon as it reads anything, it sets the boolean active to false and exits.

public void run() {
   try (ServerSocket serverSocket = new ServerSocket(port);
         Socket clientSocket = serverSocket.accept();
         PrintWriter out =
               new PrintWriter(clientSocket.getOutputStream(), true);
         BufferedReader in =
               new BufferedReader(new InputStreamReader(
                     clientSocket.getInputStream()))) {
      String inputLine;
      while ((inputLine = in.readLine()) != null) {
         message.append(inputLine);
         out.println("Stop signal received.");
         active = false;
         break;
      }
   } catch (IOException e) {
      System.out.println("Exception caught when trying to listen on port ["
            + port + "] or listening for a connection");
   }
}

It is doing a few other things of course.

  • Note the use of try-with-resources to open up several Autocloseable objects: a ServerSocket, Socket, a PrintWriter and a BufferedReader. From the Java Trail, The try-with-resources Statement: note that the close methods of resources are called in the opposite order of their creation.
  • ServerSocket and Socket are two objects that talk to each other across a network via a given port. A Socket is a client: it sends requests to a server and reads a response. A ServerSocket is a server: it listens for requests from a client and sends a response. Note that "across a network" can still mean a client and server sitting on the same machine - they will open a port and send messages via that port in the same way they would if they were on different machines. On a Windows machine, when running a program that attempts to send/receive data via sockets, your Firewall may show a pop-up asking if you give permission for the network communication to happen.
  • Although ServerSocket and Socket objects communicate with each other, they don't know how to read/write strings, integers etc - that job goes to a PrintWriter (which does the writing) and a BufferedReader (which does the reading).
  • The while loop will wait for something to be read, acknowledge the signal, store a message and set a flag to outside code that processing should stop.
    1. The while loop will wait for something to be read (in.readLine()).
    2. When something is read, it is stored (with message.append(inputLine)).
    3. An acknowledgment is then written back to the client Socket (out.println("Stop signal received.")).
    4. The boolean called active is set to false.
    5. The loop is stopped via break.
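The listener/sender handshake described in the steps above can be sketched end-to-end with only JDK classes. This is not the post's actual RandomInARow code - the class name, port choice (an OS-assigned free port) and messages are illustrative - but it shows the same pattern: a ServerSocket blocks in accept()/readLine(), the client Socket writes one line, and the server acknowledges and flips a flag.

```java
import java.io.BufferedReader;
import java.io.IOException;
import java.io.InputStreamReader;
import java.io.PrintWriter;
import java.net.ServerSocket;
import java.net.Socket;

/** Illustrative sketch of the stop-signal handshake (names and port are mine). */
public class StopSignalSketch {

    static volatile boolean active = true;

    public static void main(String[] args) throws Exception {
        // Port 0 lets the OS pick a free port - avoids clashes in this demo.
        try (ServerSocket serverSocket = new ServerSocket(0)) {
            final int port = serverSocket.getLocalPort();

            // Listener thread: waits for any line, acknowledges, flips the flag.
            Thread listener = new Thread(() -> {
                try (Socket client = serverSocket.accept();
                     PrintWriter out = new PrintWriter(client.getOutputStream(), true);
                     BufferedReader in = new BufferedReader(
                           new InputStreamReader(client.getInputStream()))) {
                    in.readLine();                        // blocks until the sender writes
                    out.println("Stop signal received."); // acknowledge back to the sender
                    active = false;
                } catch (IOException e) {
                    e.printStackTrace();
                }
            });
            listener.start();

            // Sender side: connect to the same port and send the stop message.
            try (Socket socket = new Socket("localhost", port);
                 PrintWriter out = new PrintWriter(socket.getOutputStream(), true);
                 BufferedReader in = new BufferedReader(
                       new InputStreamReader(socket.getInputStream()))) {
                out.println("STOP");
                System.out.println("Sender got back: " + in.readLine());
            }

            listener.join();
            System.out.println("active = " + active);
        }
    }
}
```

Running it prints the acknowledgment and then "active = false", mirroring how RandomInARow's work loop would see isActive() go false and clean up.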

StopSender is a thread that will open up a socket on the same port that StopListener listens to. It sends a message via that port - either a default message or one supplied by the user. It reads a response which it outputs to console, and then finishes.

public void sendStopSignal(final String message) {
   try (Socket socket = new Socket(HOST, StopListener.DEFAULT_PORT);
         PrintWriter out = new PrintWriter(socket.getOutputStream(), true);
         BufferedReader in =
               new BufferedReader(new InputStreamReader(
                     socket.getInputStream()))) {
      out.println(message);
      final String received = in.readLine();
      System.out.println("Server acknowledged with [" + received + "].");
   } catch (UnknownHostException e) {
      System.err.println("Don't know about host [" + HOST + "].");
   } catch (IOException e) {
      System.err.println("Couldn't get I/O for the connection to [" + HOST
            + "].");
   }
}

Note that HOST is defined to be localhost. This means that StopSender must be run on the same machine that StopListener is being run on! I have defined this in a variable to make it just a bit easier if I want to expand this to run across machines in the future.

public static final String HOST = "localhost";

Would you like to try out this code? You can download the jar containing all the class and java files; it was compiled under JDK 7. Save the jar file somewhere and, in the console, run it with the command below. The program will output to the console and to a text file.

java -jar randomInARow.jar

You can control where the text file is written to by supplying a path argument as below (the path/folder must already exist).

java -jar randomInARow.jar "dir/to/write/log/to"

The program will run forever, but you can stop it by running StopSender in another console.

java -cp randomInARow.jar com.rmb.randomnodatabase.taskmanagement.StopSender

This uses the default STOP message. You can supply your own message by adding an argument to the command, as below.

java -cp randomInARow.jar com.rmb.randomnodatabase.taskmanagement.StopSender "Because I want this to STOP NOW!"

Here is an extract from the log created when I used the above command to stop the application.

New count [ 24] at [14 Sep 2014, 11:36:16.946 PM] after 1 seconds and 538 milliseconds.
New count [ 22] at [14 Sep 2014, 11:36:17.371 PM] after 1 seconds and 963 milliseconds.
New count [ 26] at [14 Sep 2014, 11:36:17.459 PM] after 2 seconds and 51 milliseconds.
Finished at [14 Sep 2014, 11:36:18.454 PM] with target [30]
User cancelled operation. It took 3 seconds and 46 milliseconds to get [26] results in a row - with target [30].
Reason for stopping: Because I want this to STOP NOW
How often we flipped the same result [   1] times in a row: 13737650.
How often we flipped the same result [   2] times in a row: 6866089.
How often we flipped the same result [   3] times in a row: 3430324.

If you just want to see the code without downloading the jar, you can see the three classes I talk about in separate pastebin pages.

Monday, September 08, 2014

Flip a coin and get the same results 100 times in a row

This morning, I heard someone say on a podcast that it was possible to flip a coin and get the same result 1000 times - it would just take a while. Then I thought, "if I could flip a coin as fast as random.nextInt(2), how long would it take?"

The full code I wrote is in a pastebin. See the results of one continuous run in another pastebin.

The core code is very simple. Keep flipping a coin (random.nextInt(2)) for as long as you get the same result over and over - returning the number of times you got the same result.

private int countContiguousOccurences(final Random random) {
   int count = 1;
   final int first = random.nextInt(2);
   while (random.nextInt(2) == first) {
      count++;
   }
   return count;
}

Next, record how many times you flipped and got the same result in a row. Store this in a map of (key = int, value = long). The key int is how many times you got the same result in a row. The value long is how many times you have flipped the coin and got the same result this many times in a row. So for example, if we flipped and got heads five times in a row, and this was the tenth time we did it, we would be storing (key = 5, value = 10).

private void rememberCountOfContiguousOccurences(final Random random,
      final Map<Integer, Long> counts) {
   Integer count = countContiguousOccurences(random);
   if (counts.containsKey(count)) {
      Long countOccurences = counts.get(count);
      counts.put(count, countOccurences + 1);
   } else {
      counts.put(count, 1L);
      Date currentTime = new Date();
      message(String.format("New count [%3d] at [" //
         + FORMAT_TIMESTAMP.get().format(currentTime)
         + "] after %s.\n", count, reportTime(currentTime.getTime()
         - start)));
   }
}

I control the code with a thread in such a way that we try for ten of the same results in a row, then twenty, thirty, forty etc, incrementing the target by ten each time. Whenever the target is hit, the results are output like this:

It took 168 milliseconds to get [20] results in a row.
How often we flipped the same result [   1] times in a row: 154382.
How often we flipped the same result [   2] times in a row: 77054.
How often we flipped the same result [   3] times in a row: 38763.
How often we flipped the same result [   4] times in a row: 19426.
How often we flipped the same result [   5] times in a row: 9608.
How often we flipped the same result [   6] times in a row: 4883.
How often we flipped the same result [   7] times in a row: 2359.
How often we flipped the same result [   8] times in a row: 1202.
How often we flipped the same result [   9] times in a row: 587.
How often we flipped the same result [  10] times in a row: 316.
How often we flipped the same result [  11] times in a row: 150.
How often we flipped the same result [  12] times in a row: 70.
How often we flipped the same result [  13] times in a row: 41.
How often we flipped the same result [  14] times in a row: 22.
How often we flipped the same result [  15] times in a row: 7.
How often we flipped the same result [  16] times in a row: 9.
How often we flipped the same result [  17] times in a row: 1.
How often we flipped the same result [  18] times in a row: 1.
How often we flipped the same result [  20] times in a row: 1.

Note that I report at the end of each run how many times we achieved each count in a row. So in this run we got the same result twice in a row 77054 times, and we got the same result eighteen times in a row just once. In this run, we didn't hit nineteen times in a row at all.
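The driver described earlier - try for ten in a row, then twenty, thirty and so on - can be sketched roughly as below. This is a single-threaded reconstruction under my own names (FlipDriver, runUntil); the original jar runs this under a controlling thread and also reports timings, which are omitted here.

```java
import java.util.HashMap;
import java.util.Map;
import java.util.Random;

public class FlipDriver {
   // Keep counting contiguous runs until the current target length is hit,
   // then raise the target by ten and continue, up to finalTarget.
   public static Map<Integer, Long> runUntil(final int finalTarget) {
      final Random random = new Random();
      final Map<Integer, Long> counts = new HashMap<>();
      int target = 10;
      while (target <= finalTarget) {
         final int count = countContiguousOccurences(random);
         counts.merge(count, 1L, Long::sum); // remember how often this length occurred
         if (count >= target) {
            target += 10; // next milestone
         }
      }
      return counts;
   }

   // Flip until the result changes; return how many times it stayed the same.
   private static int countContiguousOccurences(final Random random) {
      int count = 1;
      final int first = random.nextInt(2);
      while (random.nextInt(2) == first) {
         count++;
      }
      return count;
   }
}
```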

Each time I get a new count, it is reported as it happens. In this way, I can see that I don't always get new results in order. See below for a sample of the latest run.

Initial results show that getting 30 of the same result in a row is easy - it happens within a couple of minutes - but getting to 40 is a jump to hours, and 50: days.

New count [ 30] at [10 Sep 2014, 10:02:55.254 PM] after 22 seconds and 711 milliseconds.
New count [ 29] at [10 Sep 2014, 10:04:20.712 PM] after 1 minutes, 48 seconds and 169 milliseconds.
New count [ 32] at [10 Sep 2014, 10:06:57.301 PM] after 4 minutes, 24 seconds and 758 milliseconds.
New count [ 33] at [10 Sep 2014, 10:08:40.588 PM] after 6 minutes, 8 seconds and 45 milliseconds.
New count [ 31] at [10 Sep 2014, 10:09:11.408 PM] after 6 minutes, 38 seconds and 865 milliseconds.
New count [ 34] at [10 Sep 2014, 10:12:58.718 PM] after 10 minutes, 26 seconds and 175 milliseconds.
New count [ 35] at [10 Sep 2014, 10:18:24.287 PM] after 15 minutes, 51 seconds and 744 milliseconds.
New count [ 37] at [10 Sep 2014, 10:47:36.084 PM] after 45 minutes, 3 seconds and 541 milliseconds.
New count [ 36] at [10 Sep 2014, 11:20:35.756 PM] after 1 hours, 18 minutes, 3 seconds and 213 milliseconds.
New count [ 38] at [11 Sep 2014, 01:30:12.367 AM] after 3 hours, 27 minutes, 39 seconds and 824 milliseconds.
New count [ 40] at [11 Sep 2014, 03:36:59.302 AM] after 5 hours, 34 minutes, 26 seconds and 759 milliseconds.
New count [ 39] at [11 Sep 2014, 04:18:59.431 AM] after 6 hours, 16 minutes, 26 seconds and 888 milliseconds.
New count [ 41] at [12 Sep 2014, 07:38:47.966 PM] after 1 days, 21 hours, 36 minutes, 15 seconds and 423 milliseconds.
New count [ 45] at [13 Sep 2014, 01:09:16.112 AM] after 2 days, 3 hours, 6 minutes, 43 seconds and 569 milliseconds.
New count [ 43] at [14 Sep 2014, 12:25:38.984 PM] after 3 days, 14 hours, 23 minutes, 6 seconds and 441 milliseconds.

Thursday, May 29, 2014

Regex to replace upper case with lower case in UltraEdit

Regex in UltraEdit to replace someStringWithCamelCase with "some string with camel case".

This is using regex to find upper case characters and replace them with lower case by finding ([A-Z]) and replacing with \L\1\E (to also insert the separating spaces, replace with a space followed by \L\1\E).

Thanks to this StackOverflow post: Convert a char to upper case using regular expressions (EditPad Pro).
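If you want the same conversion outside UltraEdit, here is a small Java sketch (my own, not from the post) that inserts a space before each capital and then lowercases the whole string:

```java
public class CamelCaseToWords {
   // Convert e.g. "someStringWithCamelCase" to "some string with camel case"
   // by inserting a space before each upper case letter, then lowercasing.
   public static String toWords(final String camelCase) {
      return camelCase.replaceAll("([A-Z])", " $1").toLowerCase();
   }
}
```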

Saturday, March 22, 2014

Listary, Directory Opus and AutoHotkey - a match made in Geek Heaven

Listary is a very good tool for finding files fast. It indexes all drives on your machine and integrates into Windows Explorer or Directory Opus (my explorer of choice) so that when you start typing any file name, it will straight away show you all matches, highlighting the ones in your current directory.

You can also set a hotkey to summon a Listary toolbar if you are not in a file explorer to do the same thing - search for files. What is so cool about this is that once you find the file you want, press the right arrow key and you have the context menu displayed, so you can choose to open the file, do an SVN update or whatever - without having to go to your explorer program. There are other options added too - like copying the path of the file/directory to clipboard.

The free and pro versions are the same download - and you can access many of the pro features while using it for free (with a not very intrusive nag dialog). However, the pro version is well and truly worth it: $20 for lifetime upgrades.

As I mentioned, Directory Opus is my file explorer tool of choice, and I will often use the Listary toolbar to find and go to a directory when Directory Opus is open but not in focus (i.e. not the top window). For example, I might be writing in UltraEdit and want to look at files in D:\Temp, so I summon the Listary toolbar, type Temp and press Enter on the correct entry. By default, Listary will open that directory in Directory Opus, but if Directory Opus was open and not in focus, it won't manage to bring Directory Opus to the front (because in Directory Opus that is two commands - one to open a directory and one to bring Directory Opus to the top).

So, a small fix is needed in this situation - an AutoHotkey script containing those two commands. Create an AutoHotkey script:

Run "c:\Program Files\GPSoftware\Directory Opus\dopusrt.exe" /cmd Go "%1%"
Run "c:\Program Files\GPSoftware\Directory Opus\dopusrt.exe" /cmd Go LASTACTIVELISTER

Then open Listary Options > General tab and under Default File Manager, set Path to be the path to your AHK script (D:\Documents\apps\AHK\openDirectoryInDirectoryOpus.ahk in my case) and Parameter to %1. See the below screenshot.

Also, turn on Fuzzy Matching - that allows you to type out non-contiguous parts of the file/directory name.

Monday, January 20, 2014

Testing synchronous vs asynchronous Dojo 1.9

I have this in a.js:
require(["dojo/_base/xhr"], function(){
});

I have this in b.js:
define(["dojo/_base/xhr"], function(){
  return {};
});

And I run this (in index.html):
console.log("TESTING Part One");
require([ "a" ], function() {
});
console.log("TESTING Part Two");
require([ "b" ], function() {
});

Output 1 - Asynchronous: here is my output with data-dojo-config="async: true"

Output 2 - Synchronous: here is my output with data-dojo-config=""

  1. With output 1, why are the "TESTING Part .." strings being output first?
    1. Is it because the two requires in index.html are creating anonymous blocks of code that are being run asynchronously?
    2. Is this a sneaky way of spinning off threads?
  2. With output 1, why is "b" even being output at all?
    1. From here: "Module creation is lazy and asynchronous, and does not occur immediately when define is called. This means that factory is not executed, and any dependencies of the module will not be resolved, until some running code actually requires the module."
    2. Nothing is actually calling on "b", so why is it executed at all?
  3. With output 2, does this mean that synchronous config turns off this AMD feature: "Module creation is lazy and asynchronous"?