Getting IntelliSense from a JavaScript File

Let’s say you have main.js and you want IntelliSense for objects defined in BankAccount.js. You would add the following to main.js:

/// <reference path="BankAccount.js" />
 
function run() {
 
  var ba = new BankAccount();
 
}

and BankAccount.js is just:

function BankAccount() {
 
  var total = 0;
 
  this.GetTotal = function () {
    return total;
  }
 
  this.Deposit = function (n) {
    total += n;
  }
 
}
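With the reference in place, IntelliSense picks up both members. Here’s a quick, self-contained usage check of the constructor (note that Deposit accumulates into the closed-over total rather than creating a public property):

```javascript
// Self-contained copy of the constructor for a quick sanity check.
function BankAccount() {
  var total = 0;

  this.GetTotal = function () {
    return total;
  };

  // Deposit adds to the private closure variable.
  this.Deposit = function (n) {
    total += n;
  };
}

var ba = new BankAccount();
ba.Deposit(100);
ba.Deposit(50);
console.log(ba.GetTotal()); // 150
```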

This works when both .js files are in the same directory – but they don’t have to be – just use a relative path in the reference.

Learn more about it here: http://msdn.microsoft.com/en-us/library/bb385682.aspx

Testing JavaScript Code in Sublime Text with Node.js

Testing JavaScript code can easily be done with today’s browsers in console mode. If you want to do this from within an editor like Sublime Text at the press of a button, read on. 🙂

The first thing you’ll need is a JavaScript interpreter like Node.js. Get it from http://nodejs.org/ . Then in Sublime Text go to:

Tools -> Build System -> New Build System

Create a new build system file. In this example, we’ll use JavaScript.sublime-build

By default, it’ll create it under the path:

C:\Users\Dan\AppData\Roaming\Sublime Text 2\Packages\User

Once you do that, configure the build file using the settings from the Sublime Text official docs.

So your config file will look like this:

{
  "cmd": ["C:/Program Files (x86)/nodejs/node.exe", "$file"]
}

and that should be it.

So now when you press F7, the build system will run and the output will be shown like so:
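For example, if the file being built is the following script, pressing F7 runs it through Node and the build panel simply shows whatever it prints:

```javascript
// example.js – a quick sanity check for the new build system
var nums = [1, 2, 3, 4];
var sum = nums.reduce(function (a, b) { return a + b; }, 0);
console.log("sum = " + sum); // build output panel shows: sum = 10
```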

View Bound Events from an Element in jQuery

The .data() method is a neat way to find out what data is associated with an element. For example, let’s say we have the following HTML:

<a href="http://www.google.com" id="google-link">Click me to proceed.</a>

and we run the following:

$("#google-link").data("events");

We’ll get undefined (in Chrome’s console), because there are no events bound to it yet. Now, let’s add some events:

    $("#google-link").click(function(e) {      
      alert("Google link was clicked.");
      e.preventDefault();
    });
 
    $("#google-link").hover(function(e) {      
      alert("Google link was hovered.");
      e.preventDefault();
    });

Now let’s view the page and run the data method again in the console.

$("#google-link").data("events");

The results:

This is very handy, especially when you’re using plugins that misbehave by hooking themselves into elements and getting in your way.
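For reference, what .data("events") hands back (in jQuery versions before 1.8, where this still works) is a plain object keyed by event type, each key holding an array of handler records. A sketch with hypothetical data – note that .hover(fn) with a single function actually binds it to both mouseenter and mouseleave:

```javascript
// Hypothetical shape of the events object for the link above:
// one click handler, plus the two bindings .hover() creates.
var events = {
  click: [{ handler: function () {} }],
  mouseenter: [{ handler: function () {} }],
  mouseleave: [{ handler: function () {} }]
};

// Summarize how many handlers are bound per event type.
var summary = Object.keys(events).map(function (type) {
  return type + ":" + events[type].length;
});
console.log(summary.join(", ")); // click:1, mouseenter:1, mouseleave:1
```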

MongoDB Console Tip: Better Display of Object on Console

Quick tip. This is just a neat way to better display the JSON from the console. For example, if I did this in one of my dbs:

db.client.find()

I would get:

{ "_id" : ObjectId("4da45b3cf980ed161462c2ef"), "ContactEmergencyDetails" : { "awesome" :
[ 1111, 2222, 3333 ], "cool" : "awesome", "dan" : [ 4.5, 5.322 ] }, "firstname" : null, "lastname" : null }

But if I changed it to this (converting it to Array):

db.client.find().toArray()

It formats it better:

[
  {
    "_id" : ObjectId("4da45b3cf980ed161462c2ef"),
    "ContactEmergencyDetails" : {
      "awesome" : [
        1111,
        2222,
        3333
      ],
      "cool" : "awesome",
      "dan" : [
        4.5,
        5.322
      ]
    },
    "firstname" : null,
    "lastname" : null
  }
]
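Under the hood this is just the shell pretty-printing an array of documents (newer mongo shells also offer .pretty() on a cursor for a similar effect). The formatting itself is ordinary JSON indentation, as a plain-JavaScript sketch shows:

```javascript
// The same document, pretty-printed the way the shell does for arrays.
var doc = {
  ContactEmergencyDetails: {
    awesome: [1111, 2222, 3333],
    cool: "awesome",
    dan: [4.5, 5.322]
  },
  firstname: null,
  lastname: null
};

var pretty = JSON.stringify([doc], null, 2);
console.log(pretty);
```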

Event Delegation/Propagation (Bubbling) via jQuery

I was going through the book JavaScript Programmer’s Reference by Alexei White and noticed two diagrams that explain event delegation and event propagation:

So rather than binding an event handler to each tr or td, you bind only to the table element, capture the event (via bubbling), detect the element that fired it, and then run the handler. This is more efficient than binding to each and every tr or td – instead you rely on the event bubbling up to one element.

Let’s demonstrate this using jQuery:

<html>
<head>
<script type="text/javascript" src="http://jqueryjs.googlecode.com/files/jquery-1.3.2.min.js"></script>
 
<script type="text/javascript">
$(document).ready( function() {
 
 // http://api.jquery.com/event.target/
 // http://docs.jquery.com/Events_%28Guide%29
 
 $("#datatable").click( function(event) {    
   // Detect the element that fired off the event
   if ( event.target.nodeName == "TD" )
   {
     alert( "you clicked on a td element" );
   }
 });
});
 
</script>
 
 
</head>
 
<body>
 
<table id="datatable" border="1">
 <tr>    
   <th>Header 1a</th>
   <th>Header 1b</th>
   <th>Header 1c</th>
   <th>Header 1d</th>
   <th>Header 1e</th>
 </tr>
 
 <tr id="row1">
   <td id="aaa">Data 1a</td>
   <td>Data 1b</td>
   <td>Data 1c</td>
   <td>Data 1d</td>
   <td>Data 1e</td>
 </tr>
 
 <tr id="row2">
   <td>Data 1a</td>
   <td>Data 1b</td>
   <td>Data 1c</td>
   <td>Data 1d</td>
   <td>Data 1e</td>
 </tr>
 
 <tr id="row3">
   <td>Data 1a</td>
   <td>Data 1b</td>
   <td>Data 1c</td>
   <td>Data 1d</td>
   <td>Data 1e</td>
 </tr>
 
 <tr id="row4">
   <td>Data 1a</td>
   <td>Data 1b</td>
   <td>Data 1c</td>
   <td>Data 1d</td>
   <td>Data 1e</td>
 </tr>
 
</table>
</body>
</html>
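The heart of the pattern is the target check inside the single handler. Pulled out as a plain function (a hypothetical helper, not part of the original snippet), it’s easy to see – and test – what fires and what doesn’t:

```javascript
// Decide whether the delegated handler should act, given the element
// that originally triggered the event (i.e. event.target).
function shouldHandle(target) {
  return target.nodeName === "TD";
}

console.log(shouldHandle({ nodeName: "TD" })); // true  – a data cell
console.log(shouldHandle({ nodeName: "TH" })); // false – a header cell
```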

Using CKEditor in ASP.NET MVC

If you’re using ASP.NET MVC, setting up CKEditor is pretty straightforward from NuGet, especially if you’re using jQuery as well.

1. First, install it from NuGet:

2. By default, it will put the files under the Scripts directory. I prefer to have it under /js/libs, so I move it there after it’s done downloading.

3. After that, select a view (.cshtml) file and, after jQuery is loaded, include the following .js files:

<script src="../../js/ckeditor/ckeditor.js" type="text/javascript"></script>
<script src="../../js/ckeditor/adapters/jquery.js" type="text/javascript"></script>

4. Now, assuming you have a textarea with id="message":

<textarea id="message"></textarea>

apply the following method:

$("#message").ckeditor();

5. At this point, you’re done with CKEditor with the default settings.

6. If you want to configure the editor, update config.js. The following makes the editor 500px wide and shows a limited set of toolbar buttons:

CKEDITOR.editorConfig = function (config) {
  config.width = 500;
  config.toolbar =
  [
    [
      'Source',
      'Bold',
      'Italic',
      'Underline',
      'Strike',
      '-',
      'Subscript',
      'Superscript',
      '-',
      'NumberedList',
      'BulletedList',
      '-',
      'Outdent',
      'Indent',
      '-',
      'Styles',
      'Format',
      'Font',
      'FontSize'
    ]
  ];
};

This is what it’ll look like:

You might want to check out the other configuration settings for the toolbar, as well as the jQuery CKEditor adapter docs.

Quick Simple Way to Show/Hide with a Checkbox using jQuery

Here’s a quick and simple way to do this. I often have to show and hide a set of panels based on whether something is checked or not. The jQuery code is pretty straightforward and comes in handy.

$( function() {    
 
  function toggleCheckbox( trigger, hidethese )
  {
    if ( $( trigger ).attr("checked") )
    {
      $( hidethese ).hide();
    }
    else
    {
      $( hidethese ).show();
    }
  }
 
  // jQuery object that has your checkbox
  var triggerCheckbox = $( "#showPreonDescription" );
 
  // jQuery object for collection of elements you want to hide
  // For this example, I could've also just done #panel by itself
  // to hide everything
  var panelsToHide = $("#subpanel,#maincontent,#nav");  
 
  // Call the function on click.
  triggerCheckbox.click(
    function ()
    {
      toggleCheckbox( triggerCheckbox, panelsToHide );
    }
  );
});

I have the demo if you want the source as well.
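One portability note: on jQuery 1.6 and later, checkedness is better read with .prop("checked") or .is(":checked") than with .attr("checked"). The decision itself boils down to a one-liner, shown here as a plain function:

```javascript
// Map the checkbox state to the action applied to the panels.
function actionFor(isChecked) {
  return isChecked ? "hide" : "show";
}

console.log(actionFor(true));  // hide
console.log(actionFor(false)); // show
```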

High Performance Websites

This was a great read. I certainly learned a lot of useful lessons on front-end optimization. It’s written by Steve Souders (who previously led optimization efforts at Yahoo!), who also developed YSlow – a Firefox add-on (actually an add-on to the Firefox add-on Firebug) that gives your web pages a grade from A through F and tells you how to make them better.

If you have a build script, you may be able to automate a lot of these things, like combining JS and CSS files and optimizing PNG files (check out http://www.phpied.com/give-png-a-chance/ to see how to optimize from the command line). If you’re going to minify JavaScript, I would recommend YUI Compressor (http://developer.yahoo.com/yui/compressor/) since it’s not as aggressive as Google’s Closure Compiler. The Closure Compiler (http://code.google.com/closure/compiler/) is relatively new and may produce even smaller files, but if your JavaScript is complex, its aggressive optimizations can introduce bugs.

Anywhoot, here’s what I got from it:

  1. Reduce as many HTTP requests as possible.
  2. Minify JavaScript (don’t obfuscate, because it’s more trouble than it’s worth for the benefits you get)
  3. Minify CSS and optimize it (reduce duplication).
  4. Future-expire resources for caching (PNG, JPG, GIF, JavaScript and CSS).
  5. Minify HTML (get away from tables)
  6. Put CSS at the top, put JavaScript at the bottom.
  7. For design components (background images, button images, nav images), use CSS sprites.
  8. Use PNG for design components (they compress better than GIF, have partial transparency, and can have greater color palettes).
  9. Gzip non-binary data like HTML.
  10. Combine CSS and JavaScript into single files to reduce HTTP requests.

A summary of his optimization rules is found here, though of course it’s not as detailed as the book: http://stevesouders.com/hpws/rules.php .

Stoyan Stefanov, another prominent developer who’s written tons on JavaScript and optimization, published 24 articles this month on optimization. I find these articles invaluable. It’s recent and he does actual optimization tests and tells you what tools he uses. Here’s the site: http://www.phpied.com/performance-advent-calendar-2009/

Asynchronous Upload to Amazon S3

Requirements

Make sure you have an Amazon S3 account (get it from tech). JavaScript is mandatory for this to work, since it’s what makes the POST to two different domains possible upon user submission.

Summary of Problem

This is a summary outlining the solution used in a video uploader. It entailed a form that would leverage our Amazon S3 account. In addition, because the video files could be large (and to avoid CF limitations), a critical requirement was to upload to S3 directly. At the same time, the form field information had to be saved to our database. This meant doing a double post – one to the XYZ domain, and the other to s3.amazonaws.com. This cross-domain POST could only be done via AJAX.

Here’s a visualization:

As you can see, once the user clicks “Submit”, there’s an AJAX HTTP POST to the XYZ server to save the fields to the database, and then the JavaScript runs a form.submit() on the current form to submit the file via POST to S3.
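In code, the flow can be sketched roughly like this (hypothetical helper names; the real page uses jQuery’s $.post and the form’s native submit):

```javascript
// Sketch of the double post: save the form fields to our own server
// first, then fire the real submit that sends the file to S3.
function doublePost(saveFields, submitToS3) {
  var steps = [];
  saveFields(function onSaved() { // e.g. $.post("save.cfm", data, cb)
    steps.push("saved-to-db");
    submitToS3();                 // e.g. form.submit() aimed at S3
    steps.push("posted-to-s3");
  });
  return steps;
}

// Simulated run with no network: the stubs just record the order.
var order = doublePost(function (cb) { cb(); }, function () {});
console.log(order.join(" -> ")); // saved-to-db -> posted-to-s3
```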

Introducing Amazon S3

Amazon S3 (Simple Storage Service) is a cloud service whose sole purpose is to store files. To store files in it, you can use its REST API or the AWS Management Console.

Screenshot of the AWS console, with Amazon S3 tab selected.

You get to the AWS console with an account (you can get one from tech) by going to http://aws.amazon.com/s3/ and clicking “Sign in.”

While S3 has many advantages, there are drawbacks as well. To summarize, here’s a list:

Benefits:

  • The max upload size for a file is 5GB. Plenty.
  • Every time I’ve tested uploads of different file sizes, everything has gone super smoothly – like butter – so it’s definitely reliable.
  • Amazon is super scalable (as you may already know), so parallel uploading from one or many users is really no problem.
  • It would not affect the performance of our servers – there could be many uploads, all going fast, without slowing down any other web sites on our servers.
  • The speed of the upload is limited only by the user’s computer and internet provider – much faster than going through our servers.
  • Files can be made secure and unreadable, not just through obscurity – something that’s sometimes tricky to implement in ColdFusion.

Drawbacks:

To summarize, the reason for most of the drawbacks is that we’re doing a POST request directly from one domain (ours) to another (s3.amazonaws.com). It’s not being channeled through our CF servers.

There are two ways to interact with S3: the REST API, and doing a direct POST. With the REST API, the upload data has to be channeled through a server first before sending to Amazon – this was not what we were looking for, since our servers have issues with large files. So we looked into removing ourselves as the middleman and sending the data directly to S3 – via POST.

Here are the drawbacks, mainly three:

  • If S3 detected an error in the upload, e.g. if the file is too large, there’s no default error page, just a redirect to an XML document hosted on s3.amazonaws.com. There’s no way to set an error page – it’s on Amazon’s to-do list for future release. One can’t even customize the look and feel of the XML document you’re redirected to. Side note: if the upload was successful, it gets redirected to a page you specify (at least there’s some control here).
  • Progress bar reusable code is scarce. There’s tons of code out there to do this; however, I could not find any that could do a cross-domain post. With traditional AJAX, you’re only allowed to do a POST/GET if the URL you’re using is on the same domain as the calling page. One could take the code for a progress bar plugin (as there are tons out there) and rewrite it to POST to Amazon S3 – but that would take a considerable amount of work.
  • Lack of documentation. There’s not enough documentation for handling POST requests in the official Amazon Developer docs, which makes troubleshooting difficult. Doing POST submits is a relatively new feature of Amazon S3, compared to the rest of their APIs.

So the largest hurdle is coding functionality to get around the error page, since some JavaScript magic has to be put in place. That would be another day or so of work just for that, I believe. I already have some code in place that I put together while testing. If we left it as-is and there was an error during the upload, the user would see something like this:

This would, of course, be nonsensical to the user. If the file was too large, they would see a message saying so within the message tags, and would then have to hit back to return to the form.

We can try another way, probably the easiest. When the user hits submit, we show an animated spinner while it’s uploading. We can also tell the user that if he encounters an error page, he should just hit the back button. Also, keep in mind that there’ll be validation in place before the upload to check the file extension, at the very least. The only way to see that XML error message is if the submitted file is over the limit *AND* JavaScript is turned off (which bypasses the JavaScript file extension validation).

Creating a Basic HTML that POSTs a File to Amazon

Step 1 – Create a Bucket / Folder / Object:
The first thing we need to do is create a Bucket on S3. To do this the easy way, go to the AWS console at https://console.aws.amazon.com/s3/home and create a bucket:

Buckets are where you store your objects (i.e. your files of any format). You can create a folder for further organization:

As you can see, there are 4 folders here. We can double-click on the step1_new_submissions folder and see the objects contained within it:

You can right-click on one of those objects (files) and click “Properties”:

Notice that a Properties panel will expand below. To the right, you have three tabs: Details, Permissions, Metadata.

If you click on the Permissions tab, you’ll notice that by default the selected file has the following permissions set by user USERNAME:

Go back to the details tab and click on the Link:

You’ll notice that you’ll be taken to an XML document in your browser that has the following:

That’s because you have not given it public access. To grant public access, click on the Permissions tab again, click “Add more permissions”, set the Grantee to “Everyone”, check Open/Download, and Save.

You can also make multiple objects public at once. Select an object, hold the SHIFT key, then select another object to select all the objects in between. Then select “Make Public”:

You can also upload an object manually via the “Upload” button:

Then click “Add more files”

As the file starts uploading, you’ll see the bottom panel show details about the file transfer:

Step 2: Setting up the HTML Form

The S3 REST API is very flexible: you execute the proper method from your application while sending the file over from your server to S3 (via a REST call to the correct URI). Traditionally, it would look like this:

Notice how there’s a middleman that captures the data submitted by the form along with the file, and then sends it along to S3. The middleman is the catch: double the total bandwidth is spent here – the bandwidth to go from the user’s machine to the web server (in this case CF), and then the bandwidth spent transferring the file to S3.

The advantage of this layout is that because the web server acts as a middleman, it can modify the data, change the filename, and slice-and-dice anything within the file, since the submitted file has to go through it first. Once the middleman is done, it sends the file to S3. The drawback is the wasted resources on the middleman, not to mention the middleman may have limitations handling large files (> 1GB).

As a solution, S3 offers a POST method where you can POST the file directly to S3:

Setting up the Form tag

Let’s see how we can do a cross-domain POST (to a domain other than ours) to S3. Rather than the following (which posts to the same domain):

<form action="#CGI.SCRIPT_NAME#" method="post" enctype="multipart/form-data">

We do the following:

<form action="http://s3.amazonaws.com/PastaVideos" method="post" enctype="multipart/form-data">

Where “PastaVideos” is the name of the bucket.

The format of the object URI is as follows: http://s3.amazonaws.com/<bucket>/<key>

Step 3: Setting up the Other Form Fields

This is where things get interesting. In order to set up an HTML form that can upload straight to S3, there’s a set of required input fields in the form. They are as follows:


Optional Form Fields

IMPORTANT: If you add any other form fields, S3 will throw an error. If there is in fact a need for extra form fields (which will be passed along to another server), then you must add the prefix “x-ignore-”. Let’s say, for example, I have some input fields I want S3 to ignore; then do as follows:

<input type="text" name="x-ignore-lastname" tabindex="2" class="textfield">
<input type="text" name="x-ignore-address1" tabindex="3" class="textfield">
<input type="text" name="x-ignore-address2" tabindex="4" class="textfield">
<input type="text" name="x-ignore-city" tabindex="5" class="textfield">

This is completely legal and will not throw errors.

Grabbing x-ignore- fields in ColdFusion

If you want to grab these form variables in ColdFusion, something like

Form.x-ignore-lastname

will not work because of the dashes. You’ll have to use the bracket/quotes format:

Form["x-ignore-lastname"]

to grab them.

Also, to check for existence or set a default value,

<cfparam name="form.x-ignore-lastname" default="parker" />

or

<cfparam name="form['x-ignore-lastname']" default="parker" />

will not work. You’ll have to use StructKeyExists( Form, "x-ignore-lastname" ) to check for existence.

HTML Form Example

Putting all the variables together from the previous table, we get something like the following:

<input type="hidden" name="key" value="step1_new_submissions/9AAAAAAA-D633-0944-9FBCCCCC6CFB161B_${filename}" />  
<input type="hidden" name="acl" value="private" />
<input type="hidden" name="AWSAccessKeyId" value="0N16468ABC47JDAQ2902" />
<input type="hidden" name="policy" value="eyJleHBpcmF0aW9uIjogIjIwMTgtMTAtMjFUMDA6MDA6MDBaIiwKICAiY29uZGl0aW9ucyI6IFsgCiAgICB7ImJ1Y2tldCI6lyZWN0IjogImh0dHA6Ly90ZXN0LXBhc3RhdmlkZW9zLm1pbGxlbm5pdW13ZWIuY29tL3RoYW5rcy5jZm0ifQogIF0KfQ==" />
<input type="hidden" name="signature" value="2AAAA/BhWMg4CCCCC32fzQ=" />
<input type="hidden" name="content-type" value="video/mov" />
<input type="hidden" name="success_action_redirect" value="http://XYZ.com/thanks.cfm" />      
<!--- Ignore All This Stuff... --->
<input type="text" id="x-ignore-firstname" name="x-ignore-firstname" value="peter" />
<input type="text" name="x-ignore-lastname" value="parker" />

Using Amazon’s HTML POST Form Generator

Because setting up the above HTML by hand can be tricky, Amazon has a tool that generates it for you.

The following is a screenshot of the tool. The form can be found at: http://s3.amazonaws.com/doc/s3-example-code/post/post_sample.html


So the first thing you do is:

1. Fill out the IDs:

Where AWS ID = Access Key ID   and AWS Key = Secret Access Key

2. The next thing we’ll do is fill in the POST URL:

3. The third step is the trickiest. This is a JSON document that must adhere to the JSON spec. By default, there’s already boilerplate JSON there. Let’s analyze what it means:

Let’s use a real one from the test-pastavideos.XYZweb.com page:

You’ll notice that it has content-length-range, which checks the max size (in this case 1 GB), and that it redirects to the index.cfm page when successful.

After you copy and paste that JSON policy, press “Add Policy”. Notice how the fields in the section “Sample Form Based on the Above Fields” have been populated.

4. The fields may look something like this:

5. Now click on Generate HTML:

Note that with the HTML code above, whatever the user uploads will be renamed to “testfile.txt” on S3. To retain the user’s filename, you have to switch the value to “${filename}”, which S3 substitutes with the uploaded file’s name.

6. You then copy the generated HTML and paste it into your page, adding any necessary optional fields with the x-ignore- prefix.

That should give you a basic template for uploading to S3.

NOTE: You cannot change the values of any <input> fields except the key. More about this in the next section.

Assigning a unique ID to the object

Keep these items in mind when assigning a unique filename:

  • You cannot change the user’s filename in JavaScript – once the user selects a file from his computer, you cannot append a GUID, because JavaScript will not let you set the value of a file input; you can only read it.
  • You cannot append the GUID prefix after the form is submitted, because the filename goes to S3’s server, and there’s no way to run conditional logic once it’s on S3.

To get around these limitations, you generate the GUID – or rather, a ColdFusion UUID – and append it to the filename before the form renders. Let’s look at an example from the pasta video form:

First let’s show the JSON policy document:

{"expiration": "2018-10-21T00:00:00Z",
  "conditions": 
  [ 
    {"bucket": "PastaVideos"}, 
    ["starts-with", "$key", "step1_new_submissions/"],
    {"acl": "private"},
    ["starts-with", "$Content-Type", "video/mov"],
    ["content-length-range", 0, 10737774],
    {"success_action_redirect": "http://pastavideos.XYZ.com/index.cfm"}
  ]
}

And now the HTML / ColdFusion code:

<cfset Data.videoUUID  = CreateUUID() />        
<form id="videoform" action="http://www.XYZ.com/index.cfm" method="get" enctype="multipart/form-data"> 
 
<!--- Start of Amazon S3 specific variables. --->
<input type="hidden" name="key" value="#Data.key##Data.videoUUID#_${filename}" />   
<input type="hidden" name="acl" value="#Data.acl#" />

In the code above, when the HTML is rendered, it will already have the filename S3 will use when it’s finished uploading. Remember, this is not the value of the <input type="file" /> box. So when the user uploads, it will look like this in the AWS Console:

Now, why is the action set to http://www.XYZ.com/index.cfm and the method set to get? The HTML has these as defaults, but when the page is loaded, JavaScript immediately runs and changes the action to http://s3.amazonaws.com/PastaVideos and the method to post:

<cfoutput>
  <!--- Only change the form variables if JavaScript is turned on.  --->  
  $( "##videoform" ).attr( "action", "#Variables.postURL#" ); 
  $( "##videoform" ).attr( "method", "post" );     
</cfoutput>

This way, if JavaScript is turned off, the form doesn’t POST to S3.

Other Sources of Information

Amazon S3 POST Example in HTML

AWS

Helpful Resources

Test Form

Documentation Home Page

Workflow using WS3 policies

Helpful Posts

Last Updated: 11/29/2010

Author: Dan Romero

Requirements

Make sure you have an Amazon S3 Account (get it from tech).

JavaScript is mandatory for this to work (to be able to POST to two different domains) upon user
submission.

Summary of Problem

This is a summary outlining the solution used in a video uploader. It entailed a form that would leverage the user of our Amazon S3 (S3) account. In addition, because the video files could be large (and to avoid CF limitations), a critical requirement was to upload to that page S3 directly. At the same time, the form field information had to be saved to our database. This meant doing a double post – one to the XYZ domain, and the other to s3.amazon.com. This cross-domain POST could only be done via AJAX.

Here’s visualization:

As you can see, once the user clicks “Submit”, there’s an AJAX HTTP POST to
XZY server to save the fields to the database, and then the JavaScript runs a
form.submit() on the current form to submit the file via POST to S3.

Introducing Amazon S3

Amazon S3 (Simple Storage Service) is a cloud service whose sole purpose is to store files. To store files into it, one can use its REST API or Admin Web Console (AWS).

Screenshot of the AWS console, with Amazon S3 tab selected.

One gets to the AWS console via an account (can get it from tech) and going to http://aws.amazon.com/s3/ and clicking “Sign in.”

While S3 has many advantages, there are a set of drawbacks as well. To summarize, here’s a list:

Benefits:

The max upload size for a file is 5GB. Plenty.

For all the times I’ve tested, uploads of different file sizes, everything has gone super smoothly – like butter – so definitely reliable.

Amazon is super scalable (as you may already know), so parallel uploading from one or many users is really no problem.

Would not affect performance of our servers – there could be many uploads, and they would go fast, without slowing down any other web sites on servers.

The speed of the upload is limited to the user’s computer’s specs and internet provider – much faster than our servers.

Files can be made secure and unreadable, not just through obscurity – this is sometimes tricky to implement in ColdFusion.

Drawbacks:

To summarize, the reason for some of the drawbacks, is because it’s doing a POST request directly from one domain (us) to another (s3.amazonaws.com). It’s not being channeled through our CF servers.

There are two ways to interact with S3: the REST API, and doing a direct POST. With the REST API, the upload data has to be channeled through a server first before sending to Amazon – this was not what we were looking for, since our servers have issues with large files. So we looked into removing ourselves as the middleman and sending the data directly to S3 – via POST.

Here are the drawbacks, mainly three:

If S3 detected an error in the upload, e.g. if the file is too large, there’s no default error page, just a redirect to an XML document hosted on s3.amazonaws.com. There’s no way to set an error page – it’s on Amazon’s to-do list for future release. One can’t even customize the look and feel of the XML document you’re redirected to. Side note: if the upload was successful, it gets redirected to a page you specify (at least there’s some control here).

Progress bar reusable code is scare. There’s tons of code out there to do this, however, I could not find one that could cross-domain post. With traditional AJAX, you’re only allowed to do a POST/GET if the URL you’re using is the same domain as the caller page. One could get the code for a progress bar plugin (as there are tons out there) and rewrite it to do a POST and work with Amazon S3 – but that would take a considerate amount of work.

Lack of documentation. There’s not enough documentation for handling POST requests in the official Amazon Developer docs, which makes troubleshooting difficult. Doing POST submits is a relatively new feature of Amazon S3, compared to the rest of their APIs.

So the largest hurdle is to code functionality to get around the error page, since some JavaScript magic has to be put in place. That would be another day or so of work just for that, I believe. I already have some code in place that I put together while testing. If we left it as-is, when the user uploads, and if there was an error, the user would see something like this:

Which would, of course, be nonsensical. If the file was too large, they would see a message that the file was too large within the message tags. The user would then have to hit back to return to the form.

We can try another way, probably the easiest. When the user hits submits, it starts showing the animated spinner as it’s uploading. Also, we can tell the user that if he encounters an error page, just hit the back button. Also, keep in mind that there’ll be validation in place before the upload to check for file extension, at the very least. The only edge case to seeing that XML error message is if the file they submitted is over the limit *AND*  they have JavaScript turned off (that overrides the JavaScript file extension validation).

Creating a Basic HTML that POSTs a File to Amazon

Step 1 – Create a Bucket / Folder / Object:
The first thing we need to do a is create a Bucket on S3. To do this the easy way, go to the AWS: https://console.aws.amazon.com/s3/home  and create a bucket:

Buckets are where you store your objects (i.e. your files of any format). You can create a folder for further organization:

As you can see here, there are 4 folders here. We can double-click on step1_new_submissions folder and see the objects that are contained within this:

You can right-click on one of those objects (files) and click “Properties”:

Notice that a Properties panel will expand below. To the right, you have three tabs: Details, Permissions, Metadata.

If you click on the Permissions Tab you’ll notice that by default the file that was selected has following permissions set by user USERNAME:

Go back to the details tab and click on the Link:

You’ll notice that you’ll be taken to an XML document in your browser that has the following:

It’s because you have not let it public access. To give it public access, you click on the Permissions tab again, and click “Add more permissions” , set the Grantee to “Everyone” and choose Open/Download and Save.

You can also set the make multiple objects public. Select an object, hold the SHIFT key, then select the other object to select the objects in between. Select “Make Public”:

You can also upload an object manually via the “Upload” button:

Then click “Add more files”

As the file starts uploading, you’ll see the bottom panel show details about the file transfer:

Step 2: Setting up the HTML Form

The S3 REST API is very flexible: as long as your application executes the proper method (with the correct URI), it can send a file from your server over to S3. Traditionally, it would look like this:

Notice how there's a middle-man that captures the data and the file submitted by the form, then sends it along to S3. The middle-man here is crucial, but double the total bandwidth is spent: the bandwidth to go from the user's machine to the web server (in this case ColdFusion), and then the bandwidth to transfer the file to S3.

The advantage of this layout is that because the web server acts as a middle-man, it can modify the data, change the filename, and slice-and-dice anything within the file, since the submitted file has to go through it first. Once the middle-man is done, it sends the file to S3. The drawback is the wasted resources on the middle-man, not to mention that the middle-man may have limitations handling files larger than 1 GB.

As a solution, S3 has a POST method solution where you can POST the file directly to S3:

Setting up the Form tag

Let's see how we can post cross-domain (to a domain other than ours) straight to S3. Rather than doing the following (which posts to the same domain):

<form action="#CGI.SCRIPT_NAME#" method="post" enctype="multipart/form-data">

We do the following:

<form action="http://s3.amazonaws.com/PastaVideos" method="post" enctype="multipart/form-data">

Where “PastaVideos” is the name of the bucket.

The format of the object URI is as follows:
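In path-style form, the URI is the S3 endpoint, then the bucket, then the key. A quick sketch (the bucket and key below are the examples used in this article; objectUri is a made-up helper name):

```javascript
// Build a path-style S3 object URI: http://s3.amazonaws.com/<bucket>/<key>.
// objectUri is a made-up helper; the bucket/key values match this article.
function objectUri(bucket, key) {
  // URI-encode each path segment of the key, but keep the slashes intact
  return "http://s3.amazonaws.com/" + bucket + "/" +
    key.split("/").map(encodeURIComponent).join("/");
}

console.log(objectUri("PastaVideos", "step1_new_submissions/4059C3_video.mov"));
// → http://s3.amazonaws.com/PastaVideos/step1_new_submissions/4059C3_video.mov
```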

Step 3: Setting up the Other Form Fields

This is where things get interesting. To set up an HTML form that can upload straight to S3, there's a set of required input fields in the form, each of them a hidden <input>. They are as follows:

name="key"
The location of the object you'll be uploading. You can think of it as the concatenation of the folder(s) and the filename (don't include the bucket name):

step1_new_submissions/4059C3_${filename}

The ${filename} variable is replaced with the original filename of the file the user is uploading. The "4059C3_" is a made-up ID concatenated onto ${filename} that adds uniqueness to the objects in the bucket when multiple people are uploading to it.

name="acl"
The access control list for the uploaded object. It can be set to:

private – Lets the public user upload a file, but not access it once uploaded. To make it accessible, someone has to go into the AWS Console and change the permissions.

public-read – Lets the uploader, or anyone else, see the file after it has been uploaded.

name="AWSAccessKeyId"
To get this key ID, go to your AWS account's Security Credentials page and copy the Access Key ID (this ID is also called the AWSAccessKeyId).

You should also grab the "Secret Access Key" by clicking "Show" in the adjacent column.

Keep the Secret Access Key private! Only the Access Key ID can be made public.

name="policy"
A Base64-encoded JSON document that outlines the privileges and details of the files being uploaded. More details about this in the next section.

name="signature"
The Base64-encoded policy, signed with HMAC-SHA1 using the Secret Access Key, then Base64-encoded again.

name="content-type"
The MIME type of the file that will pass through the form.

name="success_action_redirect"
The URL to go to when the upload succeeds. It can be any URL. When redirecting, S3 appends three additional URL variables:

bucket=PastaVideos

key=step1_new_submissions%2A192DCAE1-D625-0944-9FBCCD5C6CCB161B_ajax5-loader.mov

etag=%22356060aa56ce8955d38ed8c58661497a%22
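A hypothetical thanks page could read those variables back out of the redirect URL; the URL below is a made-up example:

```javascript
// S3 appends bucket, key, and etag to the success_action_redirect URL.
// The redirect URL here is a made-up example; Node's WHATWG URL class
// decodes the percent-encoded values automatically.
const redirected = new URL(
  "http://XYZ.com/thanks.cfm" +
  "?bucket=PastaVideos" +
  "&key=step1_new_submissions%2Fabc_video.mov" +
  "&etag=%22356060aa56ce8955d38ed8c58661497a%22"
);

console.log(redirected.searchParams.get("bucket")); // PastaVideos
console.log(redirected.searchParams.get("key"));    // step1_new_submissions/abc_video.mov
```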

Optional Form Fields

IMPORTANT: If you add any other additional form fields, S3 will throw an error. If there is in fact a need for extra form fields, whose values will be passed along to another server, then you must prepend the prefix "x-ignore-" to their names. Let's say, for example, I have several input fields I want S3 to ignore; then do as follows:

<input type="text" name="x-ignore-lastname" tabindex="2" class="textfield">

<input type="text" name="x-ignore-address1" tabindex="3" class="textfield">

<input type="text" name="x-ignore-address2" tabindex="4" class="textfield">

<input type="text" name="x-ignore-city" tabindex="5" class="textfield">

This is completely legal and will not throw errors.
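Just to make the rule concrete, here's a tiny sketch (the helper name is made up) that turns your own extra field names into S3-safe ones:

```javascript
// Hypothetical helper: prepend the "x-ignore-" prefix S3 requires on any
// extra, non-S3 form field names so the POST doesn't error out.
function s3SafeFieldNames(names) {
  return names.map(function (name) { return "x-ignore-" + name; });
}

console.log(s3SafeFieldNames(["lastname", "address1", "address2", "city"]));
// → ["x-ignore-lastname", "x-ignore-address1", "x-ignore-address2", "x-ignore-city"]
```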

Grabbing x-ignore- fields in ColdFusion

If you want to grab these form variables in ColdFusion, something like

Form.x-ignore-lastname

will not suffice because of the dashes. You'll have to use the bracket/quotes format:

Form["x-ignore-lastname"]

to grab them.

Also, to check for existence or set a default value, neither

<cfparam name="form.x-ignore-lastname" default="parker" />

nor

<cfparam name='form["x-ignore-lastname"]' default="parker" />

will work. You'll have to use StructKeyExists( Form, "x-ignore-lastname" ) to check for existence.

HTML Form Example

Putting together all the variables from the previous table, we get something like the following:

<input type="hidden" name="key" value="step1_new_submissions/9AAAAAAA-D633-0944-9FBCCCCC6CFB161B_${filename}" />

<input type="hidden" name="acl" value="private" />

<input type="hidden" name="AWSAccessKeyId" value="0N16468ABC47JDAQ2902" />

<input type="hidden" name="policy" value="eyJleHBpcmF0aW9uIjogIjIwMTgtMTAtMjFUMDA6MDA6MDBaIiwKICAiY29uZGl0aW9ucyI6IFsgCiAgICB7ImJ1Y2tldCI6lyZWN0IjogImh0dHA6Ly90ZXN0LXBhc3RhdmlkZW9zLm1pbGxlbm5pdW13ZWIuY29tL3RoYW5rcy5jZm0ifQogIF0KfQ==" />

<input type="hidden" name="signature" value="2AAAA/BhWMg4CCCCC32fzQ=" />

<input type="hidden" name="content-type" value="video/mov" />

<input type="hidden" name="success_action_redirect" value="http://XYZ.com/thanks.cfm" />

<!-- Ignore All This Stuff... -->
<input type="text" id="x-ignore-firstname" name="x-ignore-firstname" value="peter" />

<input type="text" name="x-ignore-lastname" value="parker" />

Using Amazon’s HTML POST Form Generator

Because setting up the above form HTML by hand can be tricky, Amazon has a tool that easily generates it for you.

The following is a screenshot of the tool. The form can be found at: http://s3.amazonaws.com/doc/s3-example-code/post/post_sample.html

So the first thing you do is:

1. Fill out the IDs:

Where AWS ID = Access Key ID, and AWS Key = Secret Access Key.

2. The next thing we'll do is fill in the POST URL:

3. The third step is the trickiest. The policy is a JSON document that must adhere to the JSON spec. By default, a boilerplate JSON document is already there. Let's analyze what it means:

Let's use a real one from the test-pastavideos.XYZweb.com page:

You'll notice that it has content-length-range, which checks the maximum size (in this case 1 GB), and that it redirects to the index.cfm page when successful.

After you copy and paste that JSON policy, press "Add Policy". Notice how the fields in the section "Sample Form Based on the Above Fields" have been populated.

4. The fields may look something like this:

5. Now click on Generate HTML:

Note that with the HTML code above, whatever the user uploads will be renamed to "testfile.txt" on S3. To retain the user's filename, you have to switch the key's value to value="${filename}".

6. You then copy that generated HTML and paste it into your page. Add any necessary optional fields with the x-ignore- prefix.

That should give you a basic template for uploading to S3.

NOTE: You cannot change the values of any <input> fields except the key. More about this in the next section.

Assigning a Unique ID to the Object

Keep these items in mind when assigning a unique filename:

You cannot change the user's filename in JavaScript. Once the user selects a file from his computer, you cannot append a GUID, because JavaScript will not let you set the value of a file input box; you can only read it.

You cannot append the GUID prefix after the form is submitted, because the submission goes straight to S3's server, and there's no way to run conditional logic once it's there.

To get around these limitations, you generate the GUID (or rather, a ColdFusion UUID) server-side while rendering the form, and append it to the filename there. Let's take a look at an example from the pasta video form:

First let’s show the JSON policy document:

{
  "expiration": "2018-10-21T00:00:00Z",
  "conditions": [
    {"bucket": "PastaVideos"},
    ["starts-with", "$key", "step1_new_submissions/"],
    {"acl": "private"},
    ["starts-with", "$Content-Type", "video/mov"],
    ["content-length-range", 0, 10737774],
    {"success_action_redirect": "http://pastavideos.XYZ.com/index.cfm"}
  ]
}
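S3 enforces these conditions server-side, but as a convenience you could mirror the two interesting ones (the starts-with check on $key and the content-length-range) in a pre-flight check before submitting. A rough sketch, with a made-up helper name:

```javascript
// Rough pre-flight mirror of two conditions from the policy above:
// ["starts-with", "$key", "step1_new_submissions/"] and
// ["content-length-range", 0, 10737774]. S3 remains the real enforcer.
function passesPolicy(key, fileSizeBytes) {
  const keyOk = key.indexOf("step1_new_submissions/") === 0; // starts-with
  const sizeOk = fileSizeBytes >= 0 && fileSizeBytes <= 10737774; // range
  return keyOk && sizeOk;
}

console.log(passesPolicy("step1_new_submissions/abc_video.mov", 500000)); // true
console.log(passesPolicy("elsewhere/video.mov", 500000));                 // false
```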

And now the HTML / ColdFusion code:

<cfset Data.videoUUID = CreateUUID() />

<form id="videoform" action="http://www.XYZ.com/index.cfm" method="get" enctype="multipart/form-data">

<!--- Start of Amazon S3 specific variables. --->

<input type="hidden" name="key" value="#Data.key##Data.videoUUID#_${filename}" />

<input type="hidden" name="acl" value="#Data.acl#" />

In the code above, when the HTML is rendered, the key already holds the filename S3 will use when the upload finishes. Remember, this is not the value of the <input type="file" /> box. So when the user uploads, it will look like this in the AWS Console:

Now why is the action set to http://www.XYZ.com/index.cfm and the method set to get? The HTML has these set as defaults, but when the page loads, JavaScript immediately runs and changes the action to http://s3.amazonaws.com/PastaVideos and the method to post:

<cfoutput>
<script type="text/javascript">
  // Only change the form variables if JavaScript is turned on.
  $(function () {
    $( "##videoform" ).attr( "action", "#Variables.postURL#" );
    $( "##videoform" ).attr( "method", "post" );
  });
</script>
</cfoutput>

This is so that if JavaScript is turned off, it doesn’t POST to S3.

Other Sources of Information

Amazon S3 POST Example in HTML

http://aws.amazon.com/code/Amazon%20S3/1093?_encoding=UTF8&jiveRedirect=1

AWS

http://aws.amazon.com/developertools/

Helpful Resources

http://wiki.smartfrog.org/wiki/display/sf/Amazon+S3

http://docs.amazonwebservices.com/AmazonS3/latest/dev/index.html?AccessPolicyLanguage_UseCases_s3_a.html

http://docs.amazonwebservices.com/AmazonS3/latest/gsg/

http://docs.amazonwebservices.com/AmazonS3/2006-03-01/dev/index.html?UsingHTTPPOST.html

http://docs.amazonwebservices.com/AmazonS3/2006-03-01/dev/index.html?HTTPPOSTExamples.html

http://docs.amazonwebservices.com/AmazonS3/2006-03-01/dev/index.html?Introduction.html#S3_ACLs

Test Form

http://s3.amazonaws.com/doc/s3-example-code/post/post_sample.html

Documentation Home Page

http://aws.amazon.com/documentation/s3/

Workflow using S3 policies

http://docs.amazonwebservices.com/AmazonS3/latest/dev/

Helpful Posts

http://aws.amazon.com/articles/1434?_encoding=UTF8&jiveRedirect=1

https://forums.aws.amazon.com/message.jspa?messageID=89017

https://forums.aws.amazon.com/message.jspa?messageID=88188

Assigning Handlers on Load

Rather than assigning event handlers to elements inline from within HTML, like so:

<a id="site" href="http://www.shinylight.com" onmouseover="Show()">Go to my site!</a>

You can assign the event handler programmatically via the onload event of the window object, like so, in the script element:

window.onload = function() 
{
  document.getElementById("linkcontent").onmouseover = Show;   
}
 
function Show( )
{  
  alert( "Hey there, how are you?" + this.href );
}

Here’s the HTML:

<body>
 
<div id="content">
  <a id="linkcontent" href="http://www.shinylight.com?var=yes">This is pretty cool</a>
</div>  
 
</body>
</html>

This is just a great way to keep JavaScript out of the HTML. In this case, the handler for window.onload will run once all of the page's resources (the full HTML and any JavaScript files) have finished loading.