Spring Boot & Amazon Web Services (EC2, RDS & S3)

This post will take you through a step by step guide to building and deploying a simple Java application to the AWS cloud platform. The application will use a few well known AWS services which I'll describe along the way. There is quite a bit of material to cover in this post so the overview of the AWS services will be light. For those interested in finding out more I'll link to the appropriate section of the AWS documentation. Amazon have done a fine job documenting their platform so I'd encourage you to have a read if time permits.      

Prerequisites 

In order to get the sample application up and running you'll need access to AWS. If you don't already have access you can register for a free account which includes access to a bunch of great services and some pretty generous allowances. I'd encourage you to get an account set up now before going any further.

What will the sample application look like? 

The application we're going to build is a very simple customer management app, consisting of a Spring Boot web tier and an AngularJS front end. We'll deploy the application to AWS and make use of the following services.
  • EC2 - Amazon's Elastic Compute Cloud provides on-demand virtual server instances that can be quickly provisioned with the operating system and software stack of your choice. We'll be using Amazon's own Linux machine image to deploy our application.
  • Relational Database Service - Amazon's database as a service allows developers to provision Amazon-managed database instances in the cloud. A number of common database platforms are supported but we'll be using a MySQL instance.
  • S3 Storage - Amazon's Simple Storage Service provides simple key-value data storage which we'll be using to store image files.
We're going to build a simple CRUD-style customer management app to create, view and delete customer details. Below is a high-level overview of each of the screens and how they interact with other components.
  • Create customer - An Angular-managed view will capture and post customer data to a Spring Boot managed endpoint. When a customer is added the endpoint will save the customer data to a MySQL database instance on RDS. The customer image will be saved to S3 storage, which will generate a unique key and a public URL to the image. The key and public URL will be saved in the database as part of the customer data.
Create Customer View
  • View customer - An Angular-managed view will issue a GET request to an endpoint for a specific customer. The endpoint will retrieve customer data from the MySQL database instance on RDS and return it to the client. The response data will include a publicly accessible URL which will be used to reference the customer image directly from S3 storage.
View Customer View
  • View all customers - An Angular-managed view will issue a GET request for all customers to a Spring Boot managed endpoint. Customers will be displayed in a simple table and users will have the ability to view or delete customer rows. The endpoint will retrieve all customer data from the MySQL database instance on RDS and return it to the client. Images will be referenced from S3 in the same way as the View Customer screen.
View All Customers View

Part 1 - Building the application  

The first part of this post will focus on building the demo application. In the second part we'll look at configuring the various services on AWS, running the application locally and then deploying it in the cloud.

Source Code  

The full source code for this tutorial is available on GitHub at https://github.com/briansjavablog/spring-boot-aws. You may find it useful to pull the code locally so that you can experiment with it as you work through the tutorial.

Application Structure



In the sections that follow we'll look at some of the most important components in detail. The focus of this post isn't Spring Boot, so I won't describe every class in detail; I've covered quite a bit of this already in a separate post. We'll focus more on AWS integration and making our app cloud ready.
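For context, the application's entry point is a standard Spring Boot bootstrap class. The sketch below is illustrative rather than lifted from the repo - the class name is my assumption, so check the source for the real one.

import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;

/* Illustrative bootstrap class - enables auto-configuration and component scanning */
@SpringBootApplication
public class Application {

    public static void main(String[] args) {
        SpringApplication.run(Application.class, args);
    }
}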

Domain Model

The domain model for the demo app is very simple and consists of just 3 entities - a Customer, an Address and a CustomerImage. The Customer entity is defined below.

@Entity(name="app_customer")
public class Customer{

    public Customer(){}
 
    public Customer(String firstName, String lastName, Date dateOfBirth, CustomerImage customerImage, Address address) {
       super();
       this.firstName = firstName;
       this.lastName = lastName;
       this.dateOfBirth = dateOfBirth;
       this.customerImage = customerImage;
       this.address = address;
    }

    @Id
    @Getter
    @GeneratedValue(strategy=GenerationType.AUTO)
    private long id;
 
    @Setter
    @Getter
    @Column(nullable = false, length = 30)
    private String firstName;
 
    @Setter
    @Getter
    @Column(nullable = false, length = 30)
    private String lastName;
 
    @Setter 
    @Getter
    @Column(nullable = false)
    private Date dateOfBirth;
 
    @Setter
    @Getter
    @OneToOne(cascade = {CascadeType.ALL})
    private CustomerImage customerImage;
 
    @Setter
    @Getter
    @OneToOne(cascade = {CascadeType.ALL})
    private Address address;
}

Address is defined as follows.

@Entity(name="app_address")
public class Address{

    public Address(){}
 
    public Address(String street, String town, String county, String postCode) {
       this.street = street;
       this.town = town;
       this.county = county;
       this.postcode = postCode;
    }

    @Id
    @Getter
    @GeneratedValue(strategy=GenerationType.AUTO)
    private long id;
 
    @Setter
    @Getter
    @Column(name = "street", nullable = false, length=40)
    private String street;
 
    @Setter
    @Getter
    @Column(name = "town", nullable = false, length=40)
    private String town;
 
    @Setter 
    @Getter
    @Column(name = "county", nullable = false, length=40)
    private String county;

    @Setter
    @Getter
    @Column(name = "postcode", nullable = false, length=40)
    private String postcode;
}

And finally CustomerImage is defined as follows.

@Entity(name="app_customer_image")
public class CustomerImage {

    public CustomerImage(){}
 
    public CustomerImage(String key, String url) {
       this.key = key;
       this.url =url;  
    }

    @Id
    @Getter
    @GeneratedValue(strategy=GenerationType.AUTO)
    private long id;
 
    @Setter
    @Getter
    @Column(name = "s3_key", nullable = false, length=200)
    private String key;
 
    @Setter
    @Getter
    @Column(name = "url", nullable = false, length=1000)
    private String url;
 
}
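The @Getter and @Setter annotations on these entities come from Project Lombok, which generates the accessor methods at compile time. To compile outside an IDE you'll need the Lombok dependency on the classpath, and IDEs need the Lombok plugin installed (a few readers note this in the comments below). For the key field above, the generated methods are equivalent to:

/* What Lombok generates at compile time for the key field - shown for clarity,
   this code does not appear in the source */
public String getKey() {
    return key;
}

public void setKey(String key) {
    this.key = key;
}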


Customer Controller

The CustomerController exposes endpoints for creating, retrieving and deleting customers and is called from an Angular front end that we'll create later.

@RestController
public class CustomerController {

    @Autowired
    private CustomerRepository customerRepository;

    @Autowired
    private FileArchiveService fileArchiveService;

    @RequestMapping(value = "/customers", method = RequestMethod.POST)
    public @ResponseBody Customer createCustomer(
            @RequestParam(value="firstName", required=true) String firstName,
            @RequestParam(value="lastName", required=true) String lastName,
            @RequestParam(value="dateOfBirth", required=true) @DateTimeFormat(pattern="yyyy-MM-dd") Date dateOfBirth,
            @RequestParam(value="street", required=true) String street,
            @RequestParam(value="town", required=true) String town,
            @RequestParam(value="county", required=true) String county,
            @RequestParam(value="postcode", required=true) String postcode,
            @RequestParam(value="image", required=true) MultipartFile image) throws Exception {

        CustomerImage customerImage = fileArchiveService.saveFileToS3(image);
        Customer customer = new Customer(firstName, lastName, dateOfBirth, customerImage,
                                         new Address(street, town, county, postcode));

        customerRepository.save(customer);
        return customer;
    }

The code snippet above does a few things:
  • Injects a CustomerRepository for saving and retrieving customer entities, and a FileArchiveService for saving and retrieving customer images in S3 storage.
  • Takes posted form data, including an image file, and maps it to method parameters.
  • Uses the FileArchiveService to save the uploaded file to S3 storage. The returned CustomerImage object contains the key and public URL returned from S3.
  • Creates a Customer entity and saves it to the database. Note that the CustomerImage is saved as part of the Customer so that the customer entity has a reference to the image stored on S3.
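The CustomerRepository itself isn't shown in the post. With Spring Data JPA it can be as simple as the sketch below (an assumption on my part - check the repo for the actual interface).

import org.springframework.data.repository.CrudRepository;

/* Spring Data generates the implementation at runtime, providing
   save(), findOne(), findAll(), exists() and delete() out of the box */
public interface CustomerRepository extends CrudRepository<Customer, Long> {
}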
@RequestMapping(value = "/customers/{customerId}", method = RequestMethod.GET)
public Customer getCustomer(@PathVariable("customerId") Long customerId) {
  
    /* validate customer Id parameter */
    if (null==customerId) {
       throw new InvalidCustomerRequestException();
    }
  
    Customer customer = customerRepository.findOne(customerId);
  
    if(null==customer){
       throw new CustomerNotFoundException();
    }
  
    return customer;
}

The method above provides an endpoint that takes a customer Id via an HTTP GET, retrieves the customer from the database and returns a JSON representation to the client.
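The post doesn't show InvalidCustomerRequestException or CustomerNotFoundException, but annotating such exceptions with @ResponseStatus is the usual way to map them to HTTP error codes. A minimal sketch of how they might look (my assumption, not necessarily how the repo defines them):

import org.springframework.http.HttpStatus;
import org.springframework.web.bind.annotation.ResponseStatus;

/* Translates to a 404 when thrown from a controller method */
@ResponseStatus(value = HttpStatus.NOT_FOUND, reason = "Customer not found")
public class CustomerNotFoundException extends RuntimeException {
}

/* Translates to a 400 for malformed requests */
@ResponseStatus(value = HttpStatus.BAD_REQUEST, reason = "Invalid customer request")
public class InvalidCustomerRequestException extends RuntimeException {
}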

@RequestMapping(value = "/customers", method = RequestMethod.GET)
public List<Customer> getCustomers() {
  
    return (List<Customer>) customerRepository.findAll();
}

The method above provides an endpoint for retrieving all customers via an HTTP GET.

@RequestMapping(value = "/customers/{customerId}", method = RequestMethod.DELETE)
public void removeCustomer(@PathVariable("customerId") Long customerId, HttpServletResponse httpResponse) {

    if(customerRepository.exists(customerId)){
        Customer customer = customerRepository.findOne(customerId);
        fileArchiveService.deleteImageFromS3(customer.getCustomerImage());
        customerRepository.delete(customer); 
    }
  
    httpResponse.setStatus(HttpStatus.NO_CONTENT.value());
}

The method above exposes an endpoint for deleting customers via an HTTP DELETE. The CustomerImage associated with the Customer is passed to the FileArchiveService to remove the customer image from S3 storage. The Customer is then removed from the database and an HTTP 204 is returned to the client.
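If you'd like to sanity check the controller without touching AWS at all, a standalone MockMvc test is one option. The sketch below is my addition rather than part of the sample project, and assumes Mockito and Spring Test are on the classpath.

import static org.springframework.test.web.servlet.request.MockMvcRequestBuilders.delete;
import static org.springframework.test.web.servlet.result.MockMvcResultMatchers.status;

import org.junit.Test;
import org.junit.runner.RunWith;
import org.mockito.InjectMocks;
import org.mockito.Mock;
import org.mockito.runners.MockitoJUnitRunner;
import org.springframework.test.web.servlet.setup.MockMvcBuilders;

@RunWith(MockitoJUnitRunner.class)
public class CustomerControllerTest {

    @Mock
    private CustomerRepository customerRepository;

    @Mock
    private FileArchiveService fileArchiveService;

    @InjectMocks
    private CustomerController customerController;

    @Test
    public void deleteReturnsNoContentEvenWhenCustomerDoesNotExist() throws Exception {
        /* exists() is unstubbed so the mock returns false and the delete is skipped,
           but the endpoint should still answer with a 204 */
        MockMvcBuilders.standaloneSetup(customerController).build()
                       .perform(delete("/customers/1"))
                       .andExpect(status().isNoContent());
    }
}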

File Archive Service

As mentioned above, we're going to save uploaded images to S3 storage. Thankfully AWS provides an SDK that makes it easy to integrate with S3, so all we need to do is write a simple Service that uses that SDK to save and retrieve files.

@Service
public class FileArchiveService {

    @Autowired
    private AmazonS3Client s3Client;

    private static final String S3_BUCKET_NAME = "brians-java-blog-aws-demo";


    /**
     * Save image to S3 and return CustomerImage containing key and public URL
     * 
     * @param multipartFile
     * @return
     * @throws FileArchiveServiceException
     */
    public CustomerImage saveFileToS3(MultipartFile multipartFile) throws FileArchiveServiceException {

        try{
            File fileToUpload = convertFromMultiPart(multipartFile);
            String key = Instant.now().getEpochSecond() + "_" + fileToUpload.getName();

            /* save file */
            s3Client.putObject(new PutObjectRequest(S3_BUCKET_NAME, key, fileToUpload));

            /* get signed URL (valid for one year) */
            GeneratePresignedUrlRequest generatePresignedUrlRequest = new GeneratePresignedUrlRequest(S3_BUCKET_NAME, key);
            generatePresignedUrlRequest.setMethod(HttpMethod.GET);
            generatePresignedUrlRequest.setExpiration(DateTime.now().plusYears(1).toDate());

            URL signedUrl = s3Client.generatePresignedUrl(generatePresignedUrlRequest); 

            return new CustomerImage(key, signedUrl.toString());
        }
        catch(Exception ex){   
            throw new FileArchiveServiceException("An error occurred saving file to S3", ex);
        }  
    }
  • The injected AmazonS3Client is provided by the AWS SDK and allows us to read and write to S3. This component gets the credentials necessary to connect to S3 from aws-config.xml, which we'll define later.
  • S3_BUCKET_NAME is the name of the S3 bucket that the application will read from and write to. You can think of a bucket as a storage container into which you can save resources. We'll look at how to define an S3 bucket later in the post.
  • The MultipartFile uploaded from the client is converted to a File, and a key is generated from the file name and a timestamp. The combination of file name and timestamp is important so that multiple files can be uploaded with the same name.
  • The S3 client's putObject call saves the file to the specified bucket using the generated key.
  • Using the bucket name and key to uniquely identify this resource, a pre-signed public facing URL is generated that can later be used to retrieve the image. The expiration is set to one year from today, telling S3 to make the resource available via this public URL for no more than one year.
  • The generated key and public facing URL are wrapped in a CustomerImage and returned to the controller. CustomerImage is saved to the database as part of the Customer persist and is the link between the Customer stored in the database and the customer's image file on S3. When a client issues a GET request for a specific customer, the public facing URL to the customer image is returned, allowing the client application to reference the image directly from S3.
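The convertFromMultiPart helper called in saveFileToS3 isn't shown in the snippet. A minimal version might simply copy the uploaded bytes to a temporary file, something like the sketch below (an illustration - the real implementation is in the repo and may differ).

/* Sits inside FileArchiveService - copies the uploaded bytes to a local temp file
   named after the original upload (java.io and Spring MultipartFile imports assumed) */
private File convertFromMultiPart(MultipartFile multipartFile) throws IOException {
    File file = new File(System.getProperty("java.io.tmpdir"), multipartFile.getOriginalFilename());
    try (FileOutputStream outputStream = new FileOutputStream(file)) {
        outputStream.write(multipartFile.getBytes());
    }
    return file;
}

One caveat worth noting: the key generated in saveFileToS3 uses the file name plus a timestamp with one-second resolution, so two uploads of the same file within the same second could still collide.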
/**
 * Delete image from S3 using specified key
 * 
 * @param customerImage
 */
public void deleteImageFromS3(CustomerImage customerImage){
    s3Client.deleteObject(new DeleteObjectRequest(S3_BUCKET_NAME, customerImage.getKey())); 
}

The method above uses the key from CustomerImage to delete the specific resource from the brians-java-blog-aws-demo bucket on S3. This is the key that was used to save the image to S3 in the saveFileToS3 method described above.

Java Resource Configuration for AWS

The AwsResourceConfig class handles configuration required for integration with S3 storage and the MySQL instance running on RDS. The contents of this class are explained in detail below.

@Configuration
@ImportResource("classpath:/aws-config.xml")
@EnableRdsInstance(databaseName = "${database-name:}", 
                   dbInstanceIdentifier = "${db-instance-identifier:}", 
                   password = "${rdsPassword:}")
public class AwsResourceConfig {

}
  • @Configuration indicates that this class contains configuration and should be processed as part of component scanning.  
  • @ImportResource tells Spring to load the XML configuration defined in aws-config.xml. We'll cover the contents of this file later.
  • @EnableRdsInstance is provided by Spring Cloud AWS as a convenient way of configuring an RDS instance. The databaseName, dbInstanceIdentifier and password are defined when setting up the RDS instance in the AWS console. We'll look at RDS set up later.
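Behind the scenes @EnableRdsInstance resolves the RDS instance metadata and registers a DataSource bean, so the rest of the application can depend on a plain javax.sql.DataSource. The hypothetical component below shows the idea; it isn't part of the sample app.

import java.sql.Connection;
import java.sql.SQLException;
import javax.sql.DataSource;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.stereotype.Service;

@Service
public class ConnectionCheckService {

    /* Created by Spring Cloud AWS from the RDS instance identifier and password */
    @Autowired
    private DataSource dataSource;

    public boolean databaseIsReachable() throws SQLException {
        /* isValid() pings the database with a 2 second timeout */
        try (Connection connection = dataSource.getConnection()) {
            return connection.isValid(2);
        }
    }
}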


XML Resource Configuration for AWS

In order to access protected resources using Amazon's SDK, an access key and a secret key must be supplied. Spring Cloud AWS provides an XML namespace for configuring both values so that they are available to the SDK at runtime.


<beans xmlns="http://www.springframework.org/schema/beans"
       xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
       xmlns:aws-context="http://www.springframework.org/schema/cloud/aws/context"
       xmlns:jdbc="http://www.springframework.org/schema/cloud/aws/jdbc"
       xsi:schemaLocation="http://www.springframework.org/schema/beans 
                           http://www.springframework.org/schema/beans/spring-beans-4.1.xsd
                           http://www.springframework.org/schema/cloud/aws/context
                           http://www.springframework.org/schema/cloud/aws/context/spring-cloud-aws-context-1.0.xsd
                           http://www.springframework.org/schema/cloud/aws/jdbc             
                           http://www.springframework.org/schema/cloud/aws/jdbc/spring-cloud-aws-jdbc-1.0.xsd">

  <aws-context:context-credentials>
     <aws-context:simple-credentials access-key="${accessKey:}" secret-key="${secretKey:}"/>
  </aws-context:context-credentials> 
  
  <aws-context:context-resource-loader/>

</beans>
  • The context-credentials element sets the access key and secret key required by the SDK. It's important to note that these values should not be set directly in your configuration or properties files, and should instead be passed to the application on start up (via environment or system variables). The secret key, as the name suggests, is very sensitive and if compromised will provide a user with access to all AWS services on your account. Make sure this value is not checked into source control, especially if your code is in a public repository. It's common for bots to trawl public repositories looking for keys that are subsequently used to compromise AWS accounts.
  • The context-resource-loader is required to access S3 storage. You'll remember that we injected an instance of AmazonS3Client into the FileArchiveService earlier. The context-resource-loader ensures that an instance of AmazonS3Client is available with the credentials supplied in context-credentials.


Front End - AngularJS

Now that the core server side components are in place it's time to look at some of the client side code. I'm not going to cover it in detail as the focus of this post is integrating with AWS, not the ins and outs of AngularJS. The AngularJS logic is wrapped up in app.js as follows.

(function () {
    var springBootAws = angular.module('SpringBootAwsDemo', ['ngRoute', 'angularUtils.directives.dirPagination']);

    springBootAws.directive('active', function ($location) {
        return {
            link: function (scope, element) {
                function makeActiveIfMatchesCurrentPath() {
                    if ($location.path().indexOf(element.find('a').attr('href').substr(1)) > -1) {
                        element.addClass('active');
                    } else {
                        element.removeClass('active');
                    }
                }

                scope.$on('$routeChangeSuccess', function () {
                    makeActiveIfMatchesCurrentPath();
                });
            }
        };
    });
    
    springBootAws.directive('fileModel', [ '$parse', function($parse) {
     return {
      restrict : 'A',
      link : function(scope, element, attrs) {
       var model = $parse(attrs.fileModel);
       var modelSetter = model.assign;

       element.bind('change', function() {
        scope.$apply(function() {
         modelSetter(scope, element[0].files[0]);
        });
       });
      }
     };
    } ]);
    
    springBootAws.controller('CreateCustomerCtrl', function ($scope, $location, $http) {
        var self = this;
        
        self.add = function () {            
         var customerModel = self.model;         
         var savedCustomer;
         
         var formData = new FormData();
         formData.append('firstName', customerModel.firstName);
         formData.append('lastName', customerModel.lastName);
         formData.append('dateOfBirth', customerModel.dateOfBirth.getFullYear() + '-' + (customerModel.dateOfBirth.getMonth() + 1) + '-' + customerModel.dateOfBirth.getDate()); // getDate() gives the day of the month; getDay() would give the day of the week
         formData.append('image', customerModel.image);
         formData.append('street', customerModel.address.street);
         formData.append('town', customerModel.address.town);
         formData.append('county', customerModel.address.county);
         formData.append('postcode', customerModel.address.postcode);
          
         $scope.saving=true;
         $http.post('/spring-boot-aws/customers', formData, { 
             transformRequest : angular.identity,
       headers : {
        'Content-Type' : undefined
       }
      }).success(function(savedCustomer) {
       $scope.saving=false;
       $location.path("/view-customer/" + savedCustomer.id);       
      }).error(function(data) {
       $scope.saving=false; 
      });
        };
    });
    
    springBootAws.controller('ViewCustomerCtrl', function ($scope, $http, $routeParams) {
        
     var customerId = $routeParams.customerId;             
     $scope.currentPage = 1;
     $scope.pageSize = 10;
     
     $scope.dataLoading = true;
        $http.get('/spring-boot-aws/customers/' + customerId).then(function onSuccess(response) {
         $scope.customer = response.data;
         $scope.dataLoading = false;
        }, function onError(response) {
         $scope.customer = response.statusText;
         $scope.dataLoading = false;
        });
    });
    
    springBootAws.controller('ViewAllCustomersCtrl', function ($scope, $http) {
     
     var self = this;
     $scope.customers = []; 
     $scope.searchText;
        
        $scope.dataLoading = true;
        $http.get('/spring-boot-aws/customers').then(function onSuccess(response) {
         $scope.customers = response.data;
         $scope.dataLoading = false;
        }, function onError(response) {
         $scope.customer = response.statusText;
         $scope.dataLoading = false;
        });        
        
        self.add = function (customerId) {
         $scope.selectedCustomer = customerId;
         $scope.customerDelete = true;
         $http.delete('/spring-boot-aws/customers/' + customerId).then(function onSuccess(response) {
             $scope.customers = _.without($scope.customers, _.findWhere($scope.customers, {id: customerId}));
             $scope.customerDelete = false;
            }, function onError(){
             
            });
        };
        
        $scope.searchFilter = function (obj) {
            var re = new RegExp($scope.searchText, 'i');
            return !$scope.searchText || re.test(obj.firstName) || re.test(obj.lastName.toString());
        };
    });
    
    springBootAws.filter('formatDate', function() {
     return function(input) {
      return moment(input).format("DD-MM-YYYY");
     };
    });
    
    springBootAws.config(function ($routeProvider) {
        $routeProvider.when('/home', {templateUrl: 'pages/home.tpl.html'});
        $routeProvider.when('/create-customer', {templateUrl: 'pages/createCustomer.tpl.html'});
        $routeProvider.when('/view-customer/:customerId', {templateUrl: 'pages/viewCustomer.tpl.html'});
        $routeProvider.when('/view-all-customers', {templateUrl: 'pages/viewAllCustomers.tpl.html'});
        $routeProvider.otherwise({redirectTo: '/home'});
    });
    
}());

The controller logic handles the 3 main views in the application - create customer, view customer and view all customers.
  • CreateCustomerCtrl uses model data populated in the view to build a FormData object and performs an HTTP POST to the create customer endpoint defined earlier. In the success callback there is a transition to the view customer route, passing the target customer Id in the URL.
  • ViewCustomerCtrl uses the customer Id passed in the URL and issues an HTTP GET to the getCustomer endpoint defined earlier. The response JSON is added to scope for display.
  • ViewAllCustomersCtrl issues an HTTP GET to the getAllCustomers endpoint to retrieve all customers. The response JSON is added to scope for display in a tabular view. The delete method takes the selected customer Id and issues an HTTP DELETE to the removeCustomer endpoint to remove the customer from the database and to remove the uploaded image from S3.
The demo app is now complete, so it's time to turn our attention to AWS so that we can configure the RDS database instance and S3 resources needed.

Part 2 - Relational Database Service & S3 Storage Setup

In this section you'll need access to the AWS console. If you haven't already done so you should register for a free account. We're going to step through the RDS database instance set up and the creation of a new storage bucket in S3. By the end of this section you should have the application running locally, hooked up to an RDS database instance and S3 storage.

Creating a Security Group to access RDS

Security groups provide a means of granting granular access to AWS services. Before creating a database instance on RDS we need to create a security group that will make the database accessible from the internet. This is required so that the application running on your local machine will be able to connect to the database instance on RDS.
Note: in a production environment your database would never be publicly accessible and would only be accessible from EC2 instances within your Virtual Private Cloud.

1. Log into the AWS console and on the landing page select EC2.
AWS Console - Landing Screen
2. Select Security Groups from the menu on the left hand side.
EC2 Landing Screen
3. Click Create Security Group.
Security Groups Screen

4. Enter a security group name and a meaningful description. Next select the default VPC (denoted with a *). A VPC (Virtual Private Cloud) allows users to configure a logically isolated network infrastructure for their applications to run on. Each AWS account comes with a default VPC so you don't have to define one to get started. For the sake of this demo we'll stick with the default VPC.
Next we'll specify rules that define the type of inbound and outbound traffic permitted by the security group. We need to define a single inbound rule that will allow TCP traffic on port 3306 (the port used by MySQL). In the rule config below I've set the inbound Source to Anywhere, meaning that the database instance will accept connections from any source IP. This is handy if you're connecting to a development database instance from public Wi-Fi where your IP will vary. In most cases we'd obviously narrow this to a specified IP range. The default outbound rule allows all traffic to all IP addresses.
Create Security Group
5. From the main AWS dashboard click RDS. On the main RDS dashboard click Launch a DB Instance.
RDS Dashboard Landing Screen

6. Select MySQL as the DB engine.
RDS - Select Database Engine

7. Select the Dev/Test option as we don't need advanced features like multi availability zone deployments for our demo.
Select Database Type

8. In the next section we define the main database instance settings. We'll retain most of the default settings so I'll describe only the most relevant settings below.
  • DB Instance Class - the size of the DB instance to launch. Choose T2 Micro as this is currently the smallest available and is free as part of free tier usage. 
  • Multi AZ Deployment - indicates whether or not we want the DB deployed across multiple available zones for high availability. We don't need this for a simple demo. 
  • Storage Type - the underlying persistence storage type used by the instance. General purpose Solid State Drives are now available by default so we'll use those. 
  • Allocated Storage - The amount of physical storage available to the database. 5GB is sufficient for this demo.
  • DB Instance Identifier - the name that will uniquely identify this database instance. This value is used by the AwsResourceConfig class we looked at earlier.
  • Master Username - the username we'll use to connect to the database.
  • Master Password - the password we'll use to authenticate with.
Database Instance Settings
9. Next we'll configure some of the advanced settings. Again we'll be able to use many of the default values here so I'll only describe the settings that are most relevant.
  • VPC - Select the default VPC. We haven't defined a custom VPC as part of this demo so select the default VPC option.
  • Subnet Group - As we're using the default VPC we'll also use the default subnet group.
  • Publicly Accessible - Set to true so that we can connect to the DB from our local dev environment.
  • Availability Zone - Select No Preference and allow AWS to decide which AZ the DB instance will reside in.
  • VPC Security Groups - Select the Security Group we defined earlier, in this case demo-rds-sec-group. This will apply the defined inbound and outbound TCP rules to the database instance.
  • Database Name - select a name for the database. This will be used along with the database identifier we defined in the last section to connect to the database. 
  • Database Port - Use default MySQL port 3306.
  • The remaining settings in the Database Options section should use the defaults as shown below.
  • Backup - Use default retention period of 7 days and No Preference for backup window. Carefully considered backup settings are obviously very important for a production database but for this demo we'll stick with the defaults. 
  • Monitoring & Maintenance - Again these values aren't important for our demo app so we'll use the defaults shown below.  
Database Instance Advanced Settings
10. Click Launch DB Instance and wait a few moments while the instance is brought up. Click View Your DB Instance to see the configured instance in the RDS instance screen.
Database Instance Created
11. In the RDS instances view the newly created instance should be displayed with status Available. If you expand the instance view you'll see a summary of the configuration details we defined above.
Configured DB Instance


Connecting to the database & creating the schema


Now that the database instance is up and running we can connect from the command line. You'll need the MySQL client locally for this section, so if you don't already have it installed you can get it here.
  • cd to MY_SQL_INSTALL_DIRECTORY\mysql-5.7.11-winx64\bin. 
  • Here is a sample connection command mysql -u briansjavablog1 -h rds-sample-db2.cg29ws2p7rim.us-west-2.rds.amazonaws.com -p
  • Replace the value following -u with the username you defined as part of the DB instance configuration. 
  • Replace the value following -h with the DB host of the instance you created above.  The host is displayed as Endpoint on your newly created DB instance (see screenshot above). Note: The Endpoint displayed in the console includes the port number (3306). When connecting from the command line you should drop this portion of the endpoint as MySQL will use 3306 by default (see screenshot below).    
  • When prompted enter the master password that you defined as part of the DB instance configuration above. 

Once connected, run the show databases command and you should see the rds_demo database we created in the AWS console. Running use rds_demo and then show tables should return no results, as the schema is empty. We can now create the schema by running SchemaScript.sql from src/main/resources. SchemaScript.sql creates 3 tables that correspond to the 3 JPA entities created earlier and is defined as follows.
DROP SCHEMA IF EXISTS rds_demo;
CREATE SCHEMA IF NOT EXISTS rds_demo DEFAULT CHARACTER SET utf8;
USE rds_demo;

CREATE TABLE IF NOT EXISTS rds_demo.app_address (
  id INT NOT NULL AUTO_INCREMENT,  
  street VARCHAR(40) NOT NULL,
  town VARCHAR(40) NOT NULL,
  county VARCHAR(40) NOT NULL,
  postcode VARCHAR(8) NOT NULL,
  PRIMARY KEY (id));
  
CREATE TABLE IF NOT EXISTS rds_demo.app_customer_image (
  id INT NOT NULL AUTO_INCREMENT,
  s3_key VARCHAR(200) NOT NULL,
  url VARCHAR(1000) NOT NULL,
  PRIMARY KEY (id));  
  
CREATE TABLE IF NOT EXISTS rds_demo.app_customer (
  id INT NOT NULL AUTO_INCREMENT,
  first_name VARCHAR(30) NOT NULL,
  last_name VARCHAR(30) NOT NULL,
  date_of_birth DATE NOT NULL,
  customer_image_id INT NOT NULL,
  address_id INT NOT NULL,
  PRIMARY KEY (id),
  CONSTRAINT FK_ADDRESS_ID
    FOREIGN KEY (address_id)
    REFERENCES rds_demo.app_address (id),
  CONSTRAINT FK_CUSTOMER_IMAGE_ID
    FOREIGN KEY (customer_image_id)
    REFERENCES rds_demo.app_customer_image (id));    

Run the script with source ROOT\spring-boot-aws\src\main\resources\SchemaScript.sql. Running show tables again should display 3 new tables as shown below.
Create Database Schema

Creating an S3 storage bucket


Now that the database instance is up and running we can look at setting up the S3 storage. On the main AWS management console select S3 under the storage and content delivery section. When the S3 management console loads, click Create Bucket.
S3 Management Console
Enter a bucket name and ensure it matches the name specified in FileArchiveService.java that we defined earlier. If you're running the sample code straight from GitHub then the bucket name should be brians-java-blog-aws-demo as shown below.
Create S3 Bucket
Click Create and the new bucket will be displayed as shown below.
New S3 Bucket

Running the application locally


It's preferable to run the application locally before attempting to deploy it to EC2, as it helps iron out any issues with RDS or S3 connectivity.

In order to run the application we need to supply application properties on start-up.  The properties are defined below and are set based on the values used to create the database instance and the access keys associated with your account.

{
 "database-name": "rds_demo",
 "db-instance-identifier": "rds-demo",
 "rdsPassword": "rds-sample-db",
 "accessKey": "XXXXXXXXXXXXXXXXXXXX",
 "secretKey": "XXXXXXXXXXXXXXXXXXXXXXXXXXXXXX"
}

Boot allows you to supply configuration on the command line via the -Dspring.application.json system property.

java -Dspring.application.json='{"database-name": "rds_demo","db-instance-identifier": "rds-demo","rdsPassword": "rds-sample-db","accessKey": "XXXXXXXXXXXXXXXX","secretKey": "XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX"}' -jar target/spring-boot-aws-0.1.0.jar

You can also supply configuration via the SPRING_APPLICATION_JSON environment variable. An example of supplying the environment variable and running the application in STS is shown below.

Environment Variable Configuration
At this point you should have the application up and running. When the application starts it will establish a connection with the database instance on RDS. Navigate to http://localhost:8080/spring-boot-aws/#/home and you should see the home screen.

Home Screen
Check that everything is working by clicking the Create New Customer link in the header to add a new customer.
Create Customer View
After saving the new customer you'll be taken to the view customer screen.
View Customer
Clicking the customer image will open a new tab where you'll see that the image is referenced directly from S3 storage.
Customer Image From S3 Storage
Note the structure of the URL is as follows:

https://<s3_bucket_name>.s3-<region>.amazonaws.com/<item_key>?AWSAccessKeyId=....
  • Bucket name - the value used to create the bucket in the AWS console. 
  • Region - the region associated with your AWS account
  • Item Key - the key we construct at runtime while saving the customer image. We looked at this logic earlier in the FileArchiveService.
To view all customers click the View All icon at the top of the screen.
View All Customers
Here you can search for customers, view a specific customer or delete a customer using the icons on the right hand side.

Deploying the application to EC2

Once everything is working locally you should be ready to deploy the application to the cloud. This section takes you through a step by step guide to creating a new EC2 instance and deploying the application. Let's get started.

Create a role for EC2

  • Before we create the EC2 instance we'll create a Role through Identity & Access Management. The role will be granted to the EC2 instance as part of the set up and will allow access to the database instance on RDS and S3 storage.
  • Log into the AWS console and navigate to Identity & Access Management.
Identity Access Management Console
  • On the left hand side select Roles and click Create New Role  
Create New Role
  • Enter the role name rds-and-s3-access-role  
Set Role Name
  • Select Role Type Amazon EC2 
Select Role Type
  • Attach AmazonS3FullAccess and AmazonRDSFullAccess policies to the role to allow read/write access to RDS and S3. 
Attach Policies for RDS and S3
  • Review the role configuration and click Create Role.
Create Role
Creating an EC2 Instance

Now that we've created a role that will provide read/write access to RDS and S3, we're ready to create the EC2 instance.
  • Navigate to the EC2 console and click Launch Instance.
  • Choose the Amazon Linux AMI. This is the base server image we'll use to create the EC2 instance.
Select Amazon Machine Image
  • To keep costs down select t2.micro as the instance type. This is a pretty lightweight instance with limited resources but is sufficient for running our demo app.
Select EC2 Instance Type
  • We only need one instance for the demo and can deploy it to the default VPC. Ensure that auto assign public IP is enabled, either via Use Subnet Setting or explicitly. This is required so that the instance can be accessed from the internet. Select the rds-and-s3-access-role IAM role we created earlier so that RDS and S3 services can be accessed from the instance. The remaining settings can be left at their defaults as shown below. When all values have been selected click Next: Add Storage.
Configure EC2 Instance
  • Use the default storage settings for this instance and click Next:Tag Instance
Add Storage to EC2 Instance
  • Add a single tag to store the instance name and click Next:Configure Security Group
Tag EC2 Instance
The security group settings define what type of traffic is allowed to reach your instance. We need to configure SSH access to the instance so that we can SSH onto the box to set it up and run the application. We also need HTTP access so that we can access the application once it's up and running. The Source value specifies which IPs the instance will accept traffic from. I spend quite a bit of time on the train (public Wi-Fi) where the IP address changes regularly, so for convenience I'm leaving the Source open. Ordinarily we'd want to limit this value so that the instance is not open to the world.
Configure EC2 Security Group
  • The final step is to review the configuration settings and click Launch. 
Review and Launch Instance
  • You'll be prompted to select a key pair that will be used to SSH onto the EC2 instance. If you don't already have a key pair you can create one now. 
Select Key Pair
  • Click Launch Instance to display the launch status screen shown below. At this point the instance is being created so you can navigate back to the EC2 instance landing screen.    
Launch Instance Summary
  • Returning to the instance landing screen you should see the instance with state initializing. It may take a few minutes before the instance state changes to running and is ready to use.
Instance Initializing
  • When the instance state changes to running the instance is ready to use. Open the description tab and get the public IP that will be used to SSH onto the instance.   
Instance Running
  • Open a command prompt and SSH onto the instance with ssh <ip_address> -l ec2-user -i <my_private_key>.pem as shown below.
SSH onto Instance
  • Once we're connected to the instance we need to do some basic setup. Switch to the root user, remove the default Java 7 JDK that comes bundled with the Amazon Machine Image and install the Java 8 JDK.
sudo su
yum remove java-1.7.0-openjdk -y
yum install java-1.8.0
  • The EC2 instance should now be ready to use, so all that remains is to copy up our application JAR and run it. On the command line use SCP to copy the application JAR to the EC2 instance.
Copy Application JAR to EC2 Instance
  • When you SSH onto the EC2 instance the spring-boot-aws-0.1.0.jar should be in /home/ec2-user/. Launch the application by running the same command you ran locally, not forgetting to supply the application config JSON.
Running the Application on EC2 Instance
  • When the application starts you should be able to access it on port 8080 using the public DNS displayed in the Description tab of the EC2 instances page.
Access Application on EC2
In a production environment we wouldn't access the application directly on the EC2 instance. Instead we'd configure an Elastic Load Balancer to route and distribute incoming traffic across multiple EC2 instances. That however is a story for another day.

Summary

We've covered quite a bit in this post and hopefully provided a decent introduction to building and deploying a simple application on AWS. EC2, RDS and S3 are just the tip of the iceberg in terms of AWS services, so I'd encourage you to dive in and experiment with some of the others. You could even use the demo app created here as a starting point for playing around with the likes of SQS or ElastiCache. As always I'm keen to hear feedback, so if you have any questions on this post or suggestions for future posts please let me know.

Comments

  1. Very nicely written, explaining A to Z. I used a user with access to S3 and RDS instead of assigning the role to the EC2 instance.

  2. Thanks, hope you found it useful. I like the idea of being able to assign a role to the instance and then tweak the permissions associated with that role. Your approach works well too, just a different way of achieving the same thing.

  3. You sir, are the hero java needs not the one it deserves. Thanks a lot for this.

  5. Brian,
    Simple and to the point, neatly explained. Wonderful. Just curious to know, will this setup work with an AWS free account? I want to try your approach.

  6. Yes, this will work with the AWS free tier. Once you register you'll have access to all the services you need, in this instance S3, RDS and EC2. You'll find the free tier allowances quite generous and unless you're building something substantial, you shouldn't need to pay for anything for quite a while.

    Replies
    1. where do I place the credentials to access S3 and my sql when running the app on EC2.

  7. Thanks Brian. I followed your instructions and when I wanted to run locally, I executed mvn clean install to create the jar file and then ran
    java -Dspring.application.json='{"database-name": "rds_demo","db-instance-identifier": "rds-demo","rdsPassword": "rds-sample-db","accessKey": "*************","secretKey": "**************"}' -jar target/spring-boot-aws-0.1.0.jar as well. But after running the above command I got an error stating:

    Error: Could not find or load main class rds_demo,db-instance-identifier:

    Any help ?

    Replies
    1. Post the full exception stacktrace here and I'll have a look.

  8. It looks like my Account is not entitled to Use RDS instance, Any Idea how to resolve it ?

    Replies
    1. You definitely have access to RDS as part of the free tier. See https://aws.amazon.com/rds/free.

  9. Thank you very much! This instruction is well written and easy to understand.

    Quick question, is it possible to use this with mybatis instead of JPA? also if I export this using maven install into a war file and deploy it in my elastic beanstalk, would it make any difference?

    Replies
    1. Changing the persistence tier should be straightforward. You can still use Spring Cloud to establish a connection with your RDS instance and create a DataSource. The DataSource can then be used by mybatis or any other Java persistence framework.

  10. How can I use this with Elastic Beanstalk?

    Replies
    1. I haven't looked into Elastic Beanstalk much as I prefer the flexibility and control of being able to spin up my own EC2 instance and configure it from scratch. If you want to deploy to Elastic Beanstalk you should be able to do so with the executable JAR you've already built, without the need to build a WAR.

  11. Does Lombok have to be installed for the code to build? For some reason the getter/setters aren't working for me. I assumed Maven would take care of this.

    Replies
    1. If you're running a Maven build from the command line it should just work as is, with Maven pulling in the required dependency. To make the code compile in an IDE you'll need to install Lombok. See https://projectlombok.org/download.html for details on installing it in Eclipse and IntelliJ.

    2. I was able to use these instructions to install Lombok into Spring Tool Suite and get it working in the IDE: http://committed.software/blog/2015/10/installing-lombok/

    3. Thank you, I too -- after reading this, installed Lombok, and the code the could compile. It does not run though. I am viewing the Spring Boot console log errors exceptions. Any chance you might review with me to resolve the issues, please?

  12. Hi, I'm trying to run this locally and I keep getting an error "No database instance with id:'rds-sample-db2' found. Please specify a valid db instance". I am able to connect to the db from MySQL workbench and I've triple check the db instance name is what is in AWS. Any advice is greatly appreciated!

    Replies
    1. Brian, I got this working! I had to specify the region in the aws-config.xml file and that got it working. You should definitely add this to the blog post as anyone not setting up their instance on the default region will run into this problem.

    2. So you ran against an RDS instance in the end? Did you get it working with your local MySQL instance? The @EnableRdsInstance annotation from Spring cloud is specifically for running against RDS, not local instances. If you want to run against a local instance I'd suggest setting up a standard javax.sql.DataSource and using that. You could use @Profiles to switch between your local and RDS data source depending where you're deployed.

      When I was building this sample I didn't use a local data source at all, just my RDS instance.

  13. Is there a simple way to associate the app's output to a specific domain? I thought I was on the right track by setting the S3_BUCKET_NAME in the FileArchiveService class, but no.

    Anyways, this lesson is really impressive as is. Thank you very much!

  14. Not sure exactly what you're asking. Are you trying to associate the S3 bucket with a specific region?

    Replies
    1. I want to access the app via "http://mydomain.com/"

  15. @Brian Can you answer the question below:
    I am getting Caused by: com.amazonaws.services.s3.model.AmazonS3Exception: Access Denied (Service: Amazon S3; Status Code: 403; Error Code: AccessDenied;
    on line

    s3Client.putObject(new PutObjectRequest(S3_BUCKET_NAME, key, fileToUpload));

    Can you help me?

    Replies
    1. A few things you should check.
      1. Have you supplied the accessKey and secretKey correctly?
      2. In the AWS console, make sure you have enabled the List, Upload/Delete and View check boxes in the Permissions section.

  16. Very good blog. I am stuck at "Running the application locally". where can I find my access key and secret key? please advice

    Replies
    1. You'll get them from the Identity Access Management section in the AWS console.

    2. I am getting error.
      2017-02-02 09:27:17.323 WARN 8820 --- [ main] ationConfigEmbeddedWebApplicationContext : Exception encountered during context initialization - cancelling refresh attempt: org.springframework.beans.factory.UnsatisfiedDependencyException: Error creating bean with name 'org.springframework.boot.autoconfigure.orm.jpa.HibernateJpaAutoConfiguration': Unsatisfied dependency expressed through constructor parameter 0; nested exception is org.springframework.beans.factory.BeanCreationException: Error creating bean with name '${db-instance-identifier:}': Invocation of init method failed; nested exception is com.amazonaws.AmazonClientException: Unable to load AWS credentials from any provider in the chain
      2017-02-02 09:27:17.328 INFO 8820 --- [ main] o.apache.catalina.core.StandardService : Stopping service Tomcat

      I am using eclipse neon.1 and added SPRING_APPLICATION_JSON with json values and using run as configuration

    3. Hello Brian! Thank you very much for the great blog! Following the above steps I faced with the same issue as Krish:

      "Error creating bean with name '${db-instance-identifier:}': Invocation of init method failed; nested exception is com.amazonaws.AmazonServiceException: The security token included in the request is invalid. (Service: AmazonRDS; Status Code: 403; Error Code: InvalidClientTokenId;"

      I created root access ID and secretKey and used them in the SPRING_APPLICATION_JSON...
      I also tried the same with IAM User, but still the same error.
      Do you have an idea?

      Thanks and best regards,
      Roman

    4. I am also getting the same error:
      "Error creating bean with name '${db-instance-identifier:}': Invocation of init method failed; nested exception is com.amazonaws.AmazonServiceException: The security token included in the request is invalid. (Service: AmazonRDS; Status Code: 403; Error Code: InvalidClientTokenId; Request ID: f0c36ab8-7d00-11e7-82e3-e9e7f259faa6)"

  17. Hello Brian, what is the recommended approach jar or war, if we install tomcat on EC2. Also what is the most secure way to store aws keys, if we use war. prop file? please throw some light

    Replies
    1. It really depends on the project but both are viable options. By default I'd be inclined to go with a simple JAR deployment as it's lightweight and removes the overhead of having to manually install and manage a Servlet container. Starting the application from the command line is quick, easy and one of the niceties of working with Boot. However, there are times when a manually installed Servlet container and WAR deployment make sense. For example, if your organisation has a base server configuration that it mandates for all projects, you'll be required to use it (not uncommon in large organisations). In this type of scenario a WAR deployment makes sense.

      In terms of the secret key, I would discourage putting it in an application properties file. The file will likely end up in source control where it will be visible to anyone with access to your repository. There are a number of different ways to set the secret key when running on EC2, one of which is to use environment variables. These can be set in the OS command line or configured in the AWS console. See the AWS documentation for more details.

  18. This comment has been removed by the author.

    Replies
    1. See the section above about creating a security group for accessing RDS. You'll notice that I've talked about the inbound rule that needs to be configured to tell RDS where it should accept requests from. See the snippet below.

      "We need to define a single inbound rule that will allow TCP traffic on port 3306 (port used by MySQL). In the rule config below I've set the inbound Source to Anywhere, meaning that the database instance will accept connections from any source IP. This is handy if you're connecting to to a development database instance from public WIFI where your IP will vary. In most cases we'd obviously narrow this to a specified IP range."

      Have you done this already?

  19. Thank you. Shall do! and reply upon success. Thank you.

    Replies
    1. It WORKED! I searched high and low for the solution, and YOU HAD it here. Thank you! Can now connect to AWS RDS from any public wifi!

      https://www.useloom.com/share/550c60b0f47e11e6836d3dae7531e5d2

  26. Connectivity log in to AWS RDS, using Spring Run Configuration environment variable. Is the db-instance-identifier the "username" ?

    https://www.useloom.com/share/cb14bb20f48511e696091fe5f9321a04

  27. Hi Brian, I followed everything in your tutorial -- I cannot:

    "At this point you should have the application up and running."

    May I please share the individual issues, with the hope of being able to accomplish everything in this wonderful tutorial?

  28. Hi Brian,
    I follow your tutorial and it looks pretty good on my personal RDS Database.
    Now i have the following problem :
    - I need to access a RDS MySQL Database that is on port : 33306 .How can i specify the mysql port in the App ?
    - The App is only accessible by LAN. I define a bean on AwsResourcesconfig.java
    --> AmazonRDSClient that return the client configuration with the Proxy host and Port. And in the aws-config.xml i define a bean "amazonRDS" with class"com.amazonaws.services.rds.AmazonRDSClient" with autowire-candidate="true" and autowire="constructor". It's a good way to bypass the proxy ?

    Thanks

  29. First of all, thank you for a well written article. I used it to get me started with AmazonAWS. I changed my database to use PostgreSQL just to make it interesting I had it up and running in a day.

    Best Wishes,

    Russ

  30. I have a question. EC2 has its own Tomcat instance and Spring Boot has its own embedded one, so how is this handled?

    That is, if I want to enable SSL in Spring Boot, where should I enable it - AWS or the Spring Boot Tomcat?

  31. Spring Boot uses an embedded Servlet container, Tomcat by default. While you're free to install Tomcat on your EC2 instance, you don't have to. You can simply use the embedded Tomcat by launching your boot app from the command line.

  32. Hi Brian,This was a wonderfull explanation i reuse most of the code and it working well .Could you please share this full application so that it can be usefull for us like newbies to the spring with aws cloud.And also it will be helpfull for me if you can have image rekognizition service usage in your blog.

  33. Cheers Brian, managed to implement saving of a user avatar into AWS S3 using pieces of your example, many thanks

  34. Hi Brian, great article. I'm trying to do something slightly different and upload images directly from the browser to s3. The problem I'm facing is that the region I'm using requires a v4 authentication. Any idea how to create the presigned url using v4 authentication?

  35. Actually I've answered my own question... Updating the spring-cloud-aws-version from the example to 1.1.4 changes the java sdk used which defaults to signing with v4


