As a software developer I'm constantly looking for tools and techniques to make the development process easier. I used to use a strict command-line environment and XEmacs for my Java development until I was introduced to Eclipse, which is a very nice, extensible IDE for Java and other languages (though it does Java best). The fact that I can refactor code (rename a class, for example), constantly build in the background to check for errors, and handle version control and project dependencies all in one place was a godsend. There was still one glaring flaw, however. Ant builds, while flexible, can be a pain to set up, even with an IDE, and because there's no standard for how builds are structured (because it's so flexible), the setup process tends to be frustrating.

This is where Maven comes in. Maven is a Java tool whose mission is to standardize building a project and managing its dependencies. When creating a new project you write a simple XML document called a POM (Project Object Model) which describes the project as well as its dependencies.

Maven defines a standard directory structure for projects. Because a standard is defined (though it can be overridden), there's no reading through complicated build.xml files to figure out what's going on or which targets you need to run to get the desired effect. This means that developers new to a project can get up to speed much faster, and it greatly simplifies the maintenance of a project's build setup.

With Maven, the days of storing a project's dependencies in a child lib/ directory are over. When you build a project, Maven automatically downloads all the necessary dependencies and installs them in your local repository. The local repository is shared by all of your Maven projects, so, for example, if you use JUnit for testing purposes, Maven downloads the appropriate version (whichever you define), stores it, and makes it available for use with all projects.
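To give a feel for it, here's a minimal sketch of a POM with the JUnit dependency mentioned above (the groupId, artifactId, and version values here are made-up placeholders for illustration):

```xml
<project xmlns="http://maven.apache.org/POM/4.0.0">
  <modelVersion>4.0.0</modelVersion>
  <!-- hypothetical coordinates for an example project -->
  <groupId>com.example</groupId>
  <artifactId>my-app</artifactId>
  <version>1.0-SNAPSHOT</version>
  <dependencies>
    <!-- Maven downloads this into the local repository automatically -->
    <dependency>
      <groupId>junit</groupId>
      <artifactId>junit</artifactId>
      <version>3.8.1</version>
      <scope>test</scope>
    </dependency>
  </dependencies>
</project>
```

That's the whole build definition; the standard directory layout takes care of the rest.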
This means you don't have to store your binary dependencies in version control; Maven handles it all for you.

Maven doesn't stop there, though. It can also handle generating documentation, building a website for your project, deploying your project to a shared repository for others to use, and more. If that's not enough, you can also integrate specialized Ant tasks into your Maven builds, which makes the transition from Ant to Maven much simpler.

Having said all this wonderful stuff about Maven, there is a reasonable learning curve, though it's not nearly as bad as Ant's. For assistance, here are some helpful links to get started with:

- The Official Apache Maven Site
- Maven in 5 Minutes - A great place to start!
- Maven Getting Started Guide - Once you've gone through the 5 Minute guide above, this is a nice, more in-depth tutorial.
- Better Builds with Maven - A free online book that is a great introduction and also goes into quite a bit of depth.
- Proximity - For those who would like to host their own Maven repositories and proxy external repositories, this is a very easy tool to accomplish just that.

Happy coding!
Ever since I discovered Flickr a few years ago I've wanted to develop an application to synchronize data between it and whatever desktop application I was using. At the time I was using Photoshop Elements on Windows and was able to "decode" the data model used in the Access database that stores all of its data. Using that knowledge I created a Java library to read the data into Java objects. On top of that I threw together some simple code to upload images and add metadata in Flickr using the Flickrj library. It was inefficient but got the job done... eventually. Since it worked well enough, I pretty much abandoned further development in favor of other projects.
Fast forward a year or two, and I started to want to move to the Mac platform and iPhoto or Aperture. I wanted to adapt my existing code to export to one of the new apps and then export from there to Flickr. Unfortunately, Java on OS X doesn't readily support interaction with native applications, which makes this kind of process difficult. After asking on some Apple development forums how this might be accomplished, I eventually gave up. For the time being I settled for exporting metadata to the images, adding them to Aperture, and painstakingly reorganizing them into sets. Not a fun process. I then added the popular Flickr Export plugin to send my photos to Flickr. Unfortunately Flickr Export is only intended to export (no syncing), so if I update metadata in Aperture it won't be updated in Flickr, which is a bug as far as I'm concerned. Because I am the way I am, I've always felt uncomfortable with this process. To top it all off, the images I'd already stored don't have their Flickr IDs embedded, so all that information was lost in Aperture (though I still had the mappings stored on my Windows box).
It seems I've found a way to kill two birds with one stone. The first issue I wanted to solve was getting Flickr information into Aperture so Flickr Export wouldn't upload photos that were already there. I knew I could do this pretty easily with AppleScript, but I don't know AppleScript, and its syntax, while intended to be readable, is incredibly cryptic, so I wanted to avoid it at all costs. Plus, I didn't feel like learning a whole new language to write what would essentially be a use-once, throwaway script. Ruby to the rescue! I evaluated two Ruby libraries that can interact with OS X applications: Ruby OSA and appscript.
While Ruby OSA has a much nicer, Ruby-like syntax and can generate API documentation for OS X applications, you're unable (as far as I can tell) to access objects in collections by name. More specifically, I wanted to be able to get an image from my Aperture library by name. I could write a quick method to iterate over the whole set of images, but with about 5,000 images in my library that would be terribly inefficient. I eventually settled on the appscript library, which can do this, and do it efficiently. Armed with this knowledge I'm now able to write code that takes the Flickr IDs associated with my old photo library and adds them to the images in Aperture. Additionally, using the code for these libraries as an example (or the Objective-C version of appscript), I should be able to do similar operations using a combination of native code and JNI calls.

I'm really excited about these prospects and hope to apply them to future development to make my Aperture-to-Flickr workflow more efficient and reliable. I'll post more about this effort as I make progress.
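To illustrate the by-name lookup that sold me on appscript, here's a rough sketch. It only runs on OS X with Aperture and the rb-appscript gem installed, and the element name `image_versions` is my assumption about Aperture's scripting dictionary, so treat this as pseudocode:

```ruby
require 'appscript'   # rb-appscript gem; OS X only
include Appscript

aperture = app('Aperture')

# appscript builds a query that the target application resolves itself,
# so looking up an image by name doesn't require fetching and iterating
# all ~5000 images on the Ruby side.
photo = aperture.image_versions['IMG_1234'].get
```

The key point is that the filtering happens inside Aperture via the Apple Event, not in a Ruby loop.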
I've tried to use Maven during my Java development but have found it rather frustrating for a few reasons.
- The documentation is a bit sparse - Most of what you need is covered, but setting up your own settings, deploying artifacts to a repository, and other more advanced subjects are underrepresented.
- Deploying artifacts is painful - This mostly goes along with the first point. The syntax, once figured out, is far from obvious and much too verbose for my taste. That's one of the drawbacks of extreme flexibility... extreme complexity.
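For a sense of the verbosity, deploying a single pre-built jar with the deploy plugin looks something like this (the coordinates and repository URL are made-up placeholders):

```shell
mvn deploy:deploy-file \
  -DgroupId=com.example \
  -DartifactId=my-lib \
  -Dversion=1.0 \
  -Dpackaging=jar \
  -Dfile=my-lib-1.0.jar \
  -DrepositoryId=internal \
  -Durl=http://repo.example.com/maven2
```

That's a lot of typing for "put this jar in the repository."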
Because I tend to be a perfectionist when it comes to code I write at home I had taken a break from coding Java for a while (except at work of course).
Recently I've had some really good ideas pop into my head, as well as ideas for how to implement them, so I wanted to get started again with Maven. I was able to utilize the Q4E project, a Maven plugin for Eclipse which I've found more stable and consistent than the original M2Eclipse plugin (though I haven't tried the latter in a while, so it may have improved). This, combined with my already POM-enabled projects in source control, made for a pretty easy return to Java development. Unfortunately there was some frustration just around the corner.
When my new application was getting bulky enough to start needing some logging, I decided to use log4j, which is commonly used in Java development. All I needed to do was add it to my project's POM as a dependency, and Maven should have taken care of downloading it and adding it to my CLASSPATH, but unfortunately that wasn't the case. Maven wasn't able to figure out how to get three of log4j's dependencies. Usually in a case like this it gives good suggestions on where to download them, but the links it gave kept redirecting me somewhere on the Java site that wasn't what I was after. After struggling with the issue for a few hours I gave up and started using the Java SDK's (inferior, IMHO) logging mechanism, but not being able to use log4j continued to nag at me.
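For reference, here's the shape of the dependency I was adding. If (as I suspect) the culprits are log4j's transitive dependencies on Sun jars that aren't hosted in the central repository, and you don't actually need the JMS/JMX appenders, they can simply be excluded in the POM (the version and exclusion list here are my assumptions):

```xml
<dependency>
  <groupId>log4j</groupId>
  <artifactId>log4j</artifactId>
  <version>1.2.15</version>
  <exclusions>
    <!-- transitive deps hosted on java.sun.com, not in central -->
    <exclusion>
      <groupId>javax.jms</groupId>
      <artifactId>jms</artifactId>
    </exclusion>
    <exclusion>
      <groupId>com.sun.jdmk</groupId>
      <artifactId>jmxtools</artifactId>
    </exclusion>
    <exclusion>
      <groupId>com.sun.jmx</groupId>
      <artifactId>jmxri</artifactId>
    </exclusion>
  </exclusions>
</dependency>
```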
This morning I decided to give it another try, and this time the first thing I did was a Google search, which brought me to this blog entry. It involved installing a separate piece of software, Artifactory, as a Maven repository proxy; I had done something similar using Proximity in the past, so I was pretty familiar with what to expect. As I already had a server machine set up and ready to run Tomcat, I was able to install Artifactory pretty quickly. Once it was installed, I followed the instructions in the blog post and in the Artifactory documentation for pointing Maven at the repository proxy first, and two of the three missing dependencies downloaded with no problem. I had to do an explicit search for jms-1.1.jar to get the last dependency, but I followed similar instructions to install it into my new repository. Long story short, I'm now able to use log4j in my Java coding, and I can easily deploy the artifacts I create to my own repository.
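For anyone following along, the relevant piece of the setup is a mirror entry in ~/.m2/settings.xml so Maven consults the proxy first (the host and port are placeholders for wherever you deployed Artifactory):

```xml
<settings>
  <mirrors>
    <mirror>
      <id>artifactory</id>
      <name>Artifactory</name>
      <!-- route requests for all repositories through the proxy -->
      <mirrorOf>*</mirrorOf>
      <url>http://myserver:8080/artifactory/repo</url>
    </mirror>
  </mirrors>
</settings>
```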
Now I need to stop messing around with all this stuff and continue my coding.
In my last post I talked a bit about my desire to recreate my blog using AWS, in particular using a serverless setup (API Gateway -> Lambda, etc.). One of the things I failed to mention is that I've already been working on this off and on for a while and have already gotten deep into what's needed to get it done. My general intention for these posts is a sort of development journal, documenting what I'm doing, the issues I ran into, the ways I ended up fixing those problems, and the general thought process I went through to get things done.
Since I’ve already started working on this, I’d like to share what I’ve gone through so far, to get more or less up to date, and then start doing proper journal-like entries. I’ve already started to take notes for some of my future posts, but for the time being, let’s get started on what I’ve done so far.
While I’ve worked with AWS services on and off for years, the number of services I’ve been exposed to has been minimal: generally S3, DynamoDB, and a smattering of others, none of which I would claim any sort of serious expertise in (especially given how quickly the services are built upon and improved). My current team at Amazon spent the last year building up a service using mainly the serverless model that I’m going for with my blog platform, using CloudFormation (essentially configuration that defines AWS infrastructure). I’ve had some opportunities to make changes to the CloudFormation configuration, but I didn’t find updating the YAML to be very friendly or fun.
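For a sense of what that raw configuration looks like, here's a trivial CloudFormation fragment defining a single S3 bucket (the logical and bucket names are made up):

```yaml
AWSTemplateFormatVersion: '2010-09-09'
Resources:
  BlogContentBucket:
    Type: AWS::S3::Bucket
    Properties:
      BucketName: my-blog-content   # placeholder name
```

Harmless at this size, but real templates run to hundreds of lines of untyped YAML with no auto-complete to lean on.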
When the CDK’s GA was announced in July, I was excited by the prospect of a user-friendly layer on top of CloudFormation and started experimenting with it a bit to understand how the APIs work in general and to set up some infrastructure as well.
If you find the idea of writing code to create infrastructure, with all the bells and whistles you expect from an IDE, intriguing, then a great place to start is the CDK Workshop that the CDK team put together. It will walk you through the prerequisites for whichever language you choose (though I expect to be doing all of my implementation in Java), as well as walk you through setting up your first application. This makes for a good jumping-off point for playing around with creating and updating resources. This experimenting has also helped me understand the various AWS services I’ll be using at a much more fundamental level. Being able to use auto-complete to make suggestions is awesome.
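As a taste, here's roughly what a minimal CDK app boils down to in Java: the same S3 bucket you'd write in raw CloudFormation, but with types behind it. This is a sketch assuming the CDK's Java packages are on your classpath (class names per the v1 `software.amazon.awscdk` modules), not something you can compile standalone:

```java
import software.amazon.awscdk.core.App;
import software.amazon.awscdk.core.Construct;
import software.amazon.awscdk.core.Stack;
import software.amazon.awscdk.services.s3.Bucket;

// One stack containing a single S3 bucket.
public class BlogStack extends Stack {
    public BlogStack(final Construct scope, final String id) {
        super(scope, id);
        Bucket.Builder.create(this, "BlogContentBucket").build();
    }
}

// Entry point; `cdk synth`/`cdk deploy` turn this into a
// CloudFormation template and push it to AWS.
class BlogApp {
    public static void main(final String[] args) {
        App app = new App();
        new BlogStack(app, "BlogStack");
        app.synth();
    }
}
```

Every property the bucket supports shows up in the IDE as a builder method, which is exactly the friendliness the raw YAML lacks.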
Starting here will allow you to get up to speed in preparation for the next post.
This whole project is intended to be a thorough deep-dive on the entire process from beginning to end. I’ll be sharing as much as I can as I go. I may go on some tangents here or there if I find that I keep doing things the hard way. I basically want to automate as much of the process as possible. I’ll try and keep any tangents I go on as separate posts so you can know ahead of time what you want to read and what you want to avoid, though I hope that all the posts will be helpful in some way.
In my next post, I’ll go through some example code for defining an API in API Gateway, and why doing so with out-of-the-box CDK code won’t scale.