<?xml version="1.0"?>
<feed xmlns="http://www.w3.org/2005/Atom" xml:lang="en">
	<id>https://wiki.expertiza.ncsu.edu/api.php?action=feedcontributions&amp;feedformat=atom&amp;user=Achen4</id>
	<title>Expertiza_Wiki - User contributions [en]</title>
	<link rel="self" type="application/atom+xml" href="https://wiki.expertiza.ncsu.edu/api.php?action=feedcontributions&amp;feedformat=atom&amp;user=Achen4"/>
	<link rel="alternate" type="text/html" href="https://wiki.expertiza.ncsu.edu/index.php?title=Special:Contributions/Achen4"/>
	<updated>2026-05-10T18:17:10Z</updated>
	<subtitle>User contributions</subtitle>
	<generator>MediaWiki 1.41.0</generator>
	<entry>
		<id>https://wiki.expertiza.ncsu.edu/index.php?title=CSC/ECE_517_Spring_2015_M1502_WSRA&amp;diff=97098</id>
		<title>CSC/ECE 517 Spring 2015 M1502 WSRA</title>
		<link rel="alternate" type="text/html" href="https://wiki.expertiza.ncsu.edu/index.php?title=CSC/ECE_517_Spring_2015_M1502_WSRA&amp;diff=97098"/>
		<updated>2015-05-04T23:14:26Z</updated>

		<summary type="html">&lt;p&gt;Achen4: /* Implementation */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;==Implement Rust Websocket==&lt;br /&gt;
This project concentrates on implementing the WebSocket API in Rust for Mozilla's web browser engine, Servo.&lt;br /&gt;
&lt;br /&gt;
==Introduction==&lt;br /&gt;
Servo&amp;lt;ref&amp;gt; https://github.com/servo/servo &amp;lt;/ref&amp;gt; is a web browser engine written in [https://github.com/rust-lang/rust Rust]. It is an experimental project that targets a new generation of hardware: mobile devices, multi-core processors and high-performance GPUs, with the aim of achieving power efficiency and maximum parallelism. Implementing the WebSocket API in Servo would allow a single persistent TCP connection to be established between the client and the server, over which bi-directional, full-duplex messages can be exchanged instantly and with little overhead, resulting in a very low-latency connection that supports interactive, dynamic applications.&lt;br /&gt;
&lt;br /&gt;
===Rust===&lt;br /&gt;
Rust &amp;lt;ref&amp;gt; https://github.com/rust-lang/rust &amp;lt;/ref&amp;gt; is a systems programming language, built in Rust itself, that is fast, memory-safe and multithreaded, but does not employ a garbage collector or otherwise impose significant runtime overhead. Rust provides both control over the hardware and safety, whereas languages such as C, C++ and Python typically offer either control or safety, but not both. &lt;br /&gt;
&lt;br /&gt;
===WebSocket===&lt;br /&gt;
WebSocket is a protocol that provides a [http://en.wikipedia.org/wiki/Duplex_(telecommunications)#FULL-DUPLEX full-duplex] channel over a single TCP connection and makes it possible to open an interactive communication session between the user's browser and a server. With WebSockets, you can send messages to a server and receive event-driven responses without having to poll the server for a reply. The [http://dev.w3.org/html5/websockets/ WebSocket] specification defines an API for establishing a &amp;quot;socket&amp;quot; connection between a web browser and a server. Establishing the connection involves a handshake, after which there is a persistent connection between the client and the server and both parties can start sending data asynchronously.&lt;br /&gt;
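&lt;br /&gt;
To make the handshake concrete, here is a short sketch in plain Python (illustrative only, not Servo's Rust code). The host and path are invented for the example; the sketch builds the HTTP Upgrade request a client would send and computes the Sec-WebSocket-Accept value the server must return.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
import base64, hashlib, os&lt;br /&gt;
&lt;br /&gt;
# Hypothetical endpoint, used only for illustration.&lt;br /&gt;
host, path = &amp;quot;echo.example.com&amp;quot;, &amp;quot;/chat&amp;quot;&lt;br /&gt;
&lt;br /&gt;
# Client side: a random 16-byte nonce, base64-encoded, sent as Sec-WebSocket-Key.&lt;br /&gt;
key = base64.b64encode(os.urandom(16)).decode()&lt;br /&gt;
request = &amp;quot;\r\n&amp;quot;.join([&amp;quot;GET &amp;quot; + path + &amp;quot; HTTP/1.1&amp;quot;,&lt;br /&gt;
                         &amp;quot;Host: &amp;quot; + host,&lt;br /&gt;
                         &amp;quot;Upgrade: websocket&amp;quot;,&lt;br /&gt;
                         &amp;quot;Connection: Upgrade&amp;quot;,&lt;br /&gt;
                         &amp;quot;Sec-WebSocket-Key: &amp;quot; + key,&lt;br /&gt;
                         &amp;quot;Sec-WebSocket-Version: 13&amp;quot;,&lt;br /&gt;
                         &amp;quot;&amp;quot;, &amp;quot;&amp;quot;])&lt;br /&gt;
&lt;br /&gt;
# Server side: the accept token is base64(SHA-1(key + fixed GUID from RFC 6455)).&lt;br /&gt;
GUID = &amp;quot;258EAFA5-E914-47DA-95CA-C5AB0DC85B11&amp;quot;&lt;br /&gt;
accept = base64.b64encode(hashlib.sha1((key + GUID).encode()).digest()).decode()&lt;br /&gt;
print(request)&lt;br /&gt;
print(&amp;quot;Expected Sec-WebSocket-Accept: &amp;quot; + accept)&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;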
&lt;br /&gt;
===Cargo and Crate===&lt;br /&gt;
Cargo &amp;lt;ref&amp;gt;http://doc.crates.io/guide.html&amp;lt;/ref&amp;gt; is an application-level package manager that allows Rust projects to declare their various dependencies. Cargo resembles Bundler in Rails, which is used to run a Rails app and install the gems listed in the Gemfile: the Gemfile corresponds to the &amp;lt;code&amp;gt; Cargo.toml &amp;lt;/code&amp;gt; file, and gems correspond to &amp;lt;code&amp;gt; crates &amp;lt;/code&amp;gt;.&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Cargo introduces two metadata files containing project information, fetches and builds the project's dependencies, and invokes rustc or another build tool with the correct parameters to build the project.&lt;br /&gt;
&lt;br /&gt;
==Project Description==&lt;br /&gt;
&lt;br /&gt;
The goal of this project is to implement WebSocket functionality into Servo using the Rust WebSocket project. Once this is completed, tests following the HTML spec [https://html.spec.whatwg.org/multipage/comms.html#network] for websockets should pass, as the same spec will be driving development. The project will be completed when the following conditions are true and all tests have passed. &lt;br /&gt;
* The client can establish a websocket connection&lt;br /&gt;
* The client can send objects to the server through the websocket&lt;br /&gt;
* The client can receive objects from the server through the websocket&lt;br /&gt;
* The client can successfully close the Websocket&lt;br /&gt;
&lt;br /&gt;
If Servo, as the client, is able to successfully perform all of these tasks, then our project will be completed.&lt;br /&gt;
&lt;br /&gt;
==Requirement Analysis==&lt;br /&gt;
The requirements for the implementation are detailed in the HTML spec shown here [https://html.spec.whatwg.org/multipage/comms.html#feedback-from-the-protocol]. The majority of the requirements have to do with feedback from the protocol; the full details can be read there. &lt;br /&gt;
&lt;br /&gt;
Outside of the HTML spec, a couple of other requirements are necessary. First, Servo must successfully compile with the websocket components included. There are currently a number of warnings when compiling Servo that are not related to WebSocket, but no new ones should be introduced. &lt;br /&gt;
&lt;br /&gt;
Second, we should pass a set of tests already included in Servo's tests/wpt/web-platform-tests/websockets directory. These tests determine if the current websocket implementation fulfills all the requirements for Servo's use. &lt;br /&gt;
&lt;br /&gt;
Finally, we should successfully issue pull requests for each feature as it is completed.&lt;br /&gt;
&lt;br /&gt;
==Implementation==&lt;br /&gt;
The full implementation specs are outlined here: https://github.com/servo/servo/wiki/WebSocket-student-project&lt;br /&gt;
&lt;br /&gt;
In general, the websocket has three functions:&lt;br /&gt;
*Open() - Initiates and connects the websocket to the server. Once the connection is open, the websocket can receive messages from the server.&lt;br /&gt;
*Send() - Sends application data to the server&lt;br /&gt;
*Close() - Closes the websocket connection&lt;br /&gt;
&lt;br /&gt;
In addition, the websocket has the following attributes (a short usage sketch follows this list):&lt;br /&gt;
*readyState - indicates the current state of the websocket. The websocket can be in one of the four states below:&lt;br /&gt;
**Connecting - an initial connection handshake has been started but the server has not responded to the handshake yet&lt;br /&gt;
**Open - the connection has been established (the server has responded to the initial connection handshake)&lt;br /&gt;
**Closing - a closing handshake has been started but the server has not responded to it yet&lt;br /&gt;
**Closed - the connection has been closed (the server has acknowledged the closing handshake and responded)&lt;br /&gt;
*extensions - the extensions being used by the websocket (see the full specs link above)&lt;br /&gt;
*protocol - the protocols being used by the websocket (see the full specs link above)&lt;br /&gt;
*bufferedAmount - the number of bytes of application data that have been buffered in the websocket but not yet sent&lt;br /&gt;
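&lt;br /&gt;
As a rough illustration of how these functions and attributes fit together, here is a small Python sketch of the readyState transitions a client goes through. It is illustrative only; the class and method names are invented for the sketch and are not Servo's Rust API.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
CONNECTING, OPEN, CLOSING, CLOSED = range(4)&lt;br /&gt;
&lt;br /&gt;
class SketchWebSocket(object):&lt;br /&gt;
    def __init__(self, url):&lt;br /&gt;
        self.url = url&lt;br /&gt;
        self.readyState = CONNECTING   # constructor starts the opening handshake&lt;br /&gt;
        self.bufferedAmount = 0&lt;br /&gt;
&lt;br /&gt;
    def on_handshake_complete(self):&lt;br /&gt;
        self.readyState = OPEN         # server answered the opening handshake&lt;br /&gt;
&lt;br /&gt;
    def send(self, data):&lt;br /&gt;
        if self.readyState != OPEN:&lt;br /&gt;
            raise RuntimeError(&amp;quot;InvalidStateError: socket is not open&amp;quot;)&lt;br /&gt;
        self.bufferedAmount += len(data)   # queued until actually written out&lt;br /&gt;
&lt;br /&gt;
    def close(self):&lt;br /&gt;
        if self.readyState in (CONNECTING, OPEN):&lt;br /&gt;
            self.readyState = CLOSING  # closing handshake sent, awaiting reply&lt;br /&gt;
&lt;br /&gt;
    def on_close_acknowledged(self):&lt;br /&gt;
        self.readyState = CLOSED&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;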
&lt;br /&gt;
==Design Patterns==&lt;br /&gt;
&amp;lt;b&amp;gt;[http://en.wikipedia.org/wiki/Factory_%28object-oriented_programming%29 Factory Pattern]&amp;lt;/b&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
For every web application running on the web browser that requires a websocket for communicating with the server, a new thread needs to be spawned that creates the client end of the socket associated with the web app. Spawning this thread can be considered similar to instantiating an object, for which the Factory Pattern can be employed. &amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;b&amp;gt;[http://en.wikipedia.org/wiki/Thread_pool_pattern Thread Pool Pattern]&amp;lt;/b&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
Servo's primary objective is to provide concurrency. This concurrency can be attained by scheduling tasks which can run in parallel; threads execute these tasks, and the Thread Pool Pattern helps manage those threads. For example, one thread could accept new requests for websockets, and for each request a new thread would be spawned to run the client side of that application's websocket.&lt;br /&gt;
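&lt;br /&gt;
A minimal sketch of this idea in Python follows (illustrative only; Servo's actual implementation is in Rust and uses its own task infrastructure). A fixed pool of worker threads drains a queue of websocket jobs that the accepting side keeps filling; the function and variable names are invented for the example.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
import queue&lt;br /&gt;
import threading&lt;br /&gt;
&lt;br /&gt;
jobs = queue.Queue()&lt;br /&gt;
&lt;br /&gt;
def worker():&lt;br /&gt;
    # Each worker thread pulls one websocket job at a time from the shared queue.&lt;br /&gt;
    while True:&lt;br /&gt;
        handler = jobs.get()&lt;br /&gt;
        if handler is None:   # sentinel value: shut this worker down&lt;br /&gt;
            break&lt;br /&gt;
        handler()&lt;br /&gt;
        jobs.task_done()&lt;br /&gt;
&lt;br /&gt;
def handle_websocket(app_name):&lt;br /&gt;
    # Stand-in for the real work: running the client end of one websocket.&lt;br /&gt;
    print(&amp;quot;handling websocket for &amp;quot; + app_name)&lt;br /&gt;
&lt;br /&gt;
# Thread pool: a small, fixed number of reusable worker threads.&lt;br /&gt;
pool = [threading.Thread(target=worker) for _ in range(4)]&lt;br /&gt;
for t in pool:&lt;br /&gt;
    t.start()&lt;br /&gt;
&lt;br /&gt;
# &amp;quot;Factory&amp;quot; role: each incoming request becomes a job handed to the pool.&lt;br /&gt;
jobs.put(lambda: handle_websocket(&amp;quot;example app&amp;quot;))&lt;br /&gt;
jobs.join()&lt;br /&gt;
for _ in pool:&lt;br /&gt;
    jobs.put(None)   # stop the workers&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;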
&lt;br /&gt;
==Proposed Test Cases==&lt;br /&gt;
&lt;br /&gt;
The test cases for websocket are already included in Servo, under the tests/wpt/web-platform-tests/websockets directory. To break down how each should be tested, here are the proposed scenarios:&lt;br /&gt;
* The send function should send some object, then wait for confirmation of sending.&lt;br /&gt;
* The constructor should make a websocket connection to the URL and either open the connection or close it based on the response coming in. &lt;br /&gt;
* The receive function should trigger on receiving an object, then send confirmation. &lt;br /&gt;
&lt;br /&gt;
All of these are tested thoroughly in the directory given at the beginning of this section.&lt;br /&gt;
&lt;br /&gt;
==Further Readings==&lt;br /&gt;
&amp;lt;b&amp;gt;[https://html.spec.whatwg.org/multipage/comms.html#network HTML Spec for WebSockets]&amp;lt;/b&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;b&amp;gt;[https://github.com/servo/servo/wiki/WebSocket-student-project Mozilla Websocket Project Page]&amp;lt;/b&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;b&amp;gt;[https://github.com/servo/servo Servo github]&amp;lt;/b&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;b&amp;gt;[http://cyderize.github.io/rust-websocket/ Rust Websocket]&amp;lt;/b&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==References==&lt;br /&gt;
&amp;lt;references/&amp;gt;&lt;/div&gt;</summary>
		<author><name>Achen4</name></author>
	</entry>
	<entry>
		<id>https://wiki.expertiza.ncsu.edu/index.php?title=CSC/ECE_517_Spring_2015_M1502_WSRA&amp;diff=96422</id>
		<title>CSC/ECE 517 Spring 2015 M1502 WSRA</title>
		<link rel="alternate" type="text/html" href="https://wiki.expertiza.ncsu.edu/index.php?title=CSC/ECE_517_Spring_2015_M1502_WSRA&amp;diff=96422"/>
		<updated>2015-04-01T23:57:00Z</updated>

		<summary type="html">&lt;p&gt;Achen4: /* Implementation */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;==Implement Rust Websocket==&lt;br /&gt;
This project concentrates on implementing the WebSocket API in Rust for Mozilla's web browser engine, Servo.&lt;br /&gt;
&lt;br /&gt;
==Introduction==&lt;br /&gt;
Servo&amp;lt;ref&amp;gt; https://github.com/servo/servo &amp;lt;/ref&amp;gt; is a web browser engine written in [https://github.com/rust-lang/rust Rust]. It is an experimental project that targets a new generation of hardware: mobile devices, multi-core processors and high-performance GPUs, with the aim of achieving power efficiency and maximum parallelism. Implementing the WebSocket API in Servo would allow a single persistent TCP connection to be established between the client and the server, over which bi-directional, full-duplex messages can be exchanged instantly and with little overhead, resulting in a very low-latency connection that supports interactive, dynamic applications.&lt;br /&gt;
&lt;br /&gt;
===Rust===&lt;br /&gt;
Rust &amp;lt;ref&amp;gt; https://github.com/rust-lang/rust &amp;lt;/ref&amp;gt; is a systems programming language, built in Rust itself, that is fast, memory-safe and multithreaded, but does not employ a garbage collector or otherwise impose significant runtime overhead. Rust provides both control over the hardware and safety, whereas languages such as C, C++ and Python typically offer either control or safety, but not both. &lt;br /&gt;
&lt;br /&gt;
===WebSocket===&lt;br /&gt;
WebSocket is a protocol that provides a [http://en.wikipedia.org/wiki/Duplex_(telecommunications)#FULL-DUPLEX full-duplex] channel over a single TCP connection and makes it possible to open an interactive communication session between the user's browser and a server. With WebSockets, you can send messages to a server and receive event-driven responses without having to poll the server for a reply. The [http://dev.w3.org/html5/websockets/ WebSocket] specification defines an API for establishing a &amp;quot;socket&amp;quot; connection between a web browser and a server. Establishing the connection involves a handshake, after which there is a persistent connection between the client and the server and both parties can start sending data asynchronously.&lt;br /&gt;
&lt;br /&gt;
===Cargo and Crate===&lt;br /&gt;
Cargo &amp;lt;ref&amp;gt;http://doc.crates.io/guide.html&amp;lt;/ref&amp;gt; is an application-level package manager that allows Rust projects to declare their various dependencies. Cargo resembles Bundler in Rails, which is used to run a Rails app and install the gems listed in the Gemfile: the Gemfile corresponds to the &amp;lt;code&amp;gt; Cargo.toml &amp;lt;/code&amp;gt; file, and gems correspond to &amp;lt;code&amp;gt; crates &amp;lt;/code&amp;gt;.&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Cargo introduces two metadata files containing project information, fetches and builds the project's dependencies, and invokes rustc or another build tool with the correct parameters to build the project.&lt;br /&gt;
&lt;br /&gt;
==Project Description==&lt;br /&gt;
==Requirement Analysis==&lt;br /&gt;
==Implementation==&lt;br /&gt;
'''When the WebSocket connection is established, the user agent must queue a task to run these steps:'''&lt;br /&gt;
*If the WebSocket object's list of client-specified protocols was not empty, but the subprotocol in use is the null value, then fail the WebSocket connection, set the readyState attribute's value to CLOSING, and abort these steps.&lt;br /&gt;
*Change the readyState attribute's value to OPEN.&lt;br /&gt;
*Change the extensions attribute's value to the extensions in use, if it is not the null value.&lt;br /&gt;
*Change the protocol attribute's value to the subprotocol in use, if it is not the null value.&lt;br /&gt;
*Act as if the user agent had received a set-cookie-string consisting of the cookies set during the server's opening handshake, for the URL url given to the WebSocket() constructor.&lt;br /&gt;
*Fire a simple event named open at the WebSocket object.&lt;br /&gt;
&lt;br /&gt;
'''When a WebSocket message has been received with type type and data data, the user agent must queue a task to follow these steps:'''&lt;br /&gt;
*If the readyState attribute's value is not OPEN, then abort these steps.&lt;br /&gt;
*Let event be a newly created trusted event that uses the MessageEvent interface, with the event type message, which does not bubble, is not cancelable, and has no default action.&lt;br /&gt;
*Initialise event's origin attribute to the Unicode serialisation of the origin of the URL that was passed to the WebSocket object's constructor.&lt;br /&gt;
*If type indicates that the data is Text, then initialise event's data attribute to data.&lt;br /&gt;
*If type indicates that the data is Binary, and binaryType is set to &amp;quot;blob&amp;quot;, then initialise event's data attribute to a new Blob object that represents data as its raw data.&lt;br /&gt;
*If type indicates that the data is Binary, and binaryType is set to &amp;quot;arraybuffer&amp;quot;, then initialise event's data attribute to a new ArrayBuffer object whose contents are data.&lt;br /&gt;
*Dispatch event at the WebSocket object.&lt;br /&gt;
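&lt;br /&gt;
The two task lists above can be condensed into a short Python sketch (illustrative only; Servo implements this in Rust, and the types and helper names here are invented for the example). The cookie step and the task-queueing machinery are omitted.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
CONNECTING, OPEN, CLOSING, CLOSED = range(4)&lt;br /&gt;
&lt;br /&gt;
class Blob(object):&lt;br /&gt;
    def __init__(self, raw):&lt;br /&gt;
        self.raw = raw&lt;br /&gt;
&lt;br /&gt;
class MessageEvent(object):&lt;br /&gt;
    def __init__(self, origin, data):&lt;br /&gt;
        self.type, self.bubbles, self.cancelable = &amp;quot;message&amp;quot;, False, False&lt;br /&gt;
        self.origin, self.data = origin, data&lt;br /&gt;
&lt;br /&gt;
def connection_established(ws, subprotocol_in_use, extensions_in_use):&lt;br /&gt;
    # Client asked for subprotocols but none was negotiated: fail the connection.&lt;br /&gt;
    if ws.requested_protocols and subprotocol_in_use is None:&lt;br /&gt;
        ws.readyState = CLOSING&lt;br /&gt;
        return&lt;br /&gt;
    ws.readyState = OPEN&lt;br /&gt;
    if extensions_in_use is not None:&lt;br /&gt;
        ws.extensions = extensions_in_use&lt;br /&gt;
    if subprotocol_in_use is not None:&lt;br /&gt;
        ws.protocol = subprotocol_in_use&lt;br /&gt;
    ws.fire_event(&amp;quot;open&amp;quot;)&lt;br /&gt;
&lt;br /&gt;
def message_received(ws, msg_type, data):&lt;br /&gt;
    if ws.readyState != OPEN:          # deliver messages only on open sockets&lt;br /&gt;
        return&lt;br /&gt;
    if msg_type == &amp;quot;text&amp;quot;:&lt;br /&gt;
        payload = data&lt;br /&gt;
    elif ws.binaryType == &amp;quot;blob&amp;quot;:&lt;br /&gt;
        payload = Blob(data)&lt;br /&gt;
    else:                              # binaryType == &amp;quot;arraybuffer&amp;quot;&lt;br /&gt;
        payload = bytearray(data)&lt;br /&gt;
    ws.dispatch_event(MessageEvent(ws.origin, payload))&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;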
&lt;br /&gt;
==Architecture==&lt;br /&gt;
==Component Design==&lt;br /&gt;
==Data Design==&lt;br /&gt;
==Design Patterns==&lt;br /&gt;
&amp;lt;b&amp;gt;[http://en.wikipedia.org/wiki/Factory_%28object-oriented_programming%29 Factory Pattern]&amp;lt;/b&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
For every web application running on the web browser that requires a websocket for communicating with the server, a new thread needs to be spawned that creates the client end of the socket associated with the web app. Spawning this thread can be considered similar to instantiating an object, for which the Factory Pattern can be employed. &amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;b&amp;gt;[http://en.wikipedia.org/wiki/Thread_pool_pattern Thread Pool Pattern]&amp;lt;/b&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
Servo's primary objective is to provide concurrency. This concurrency can be attained by scheduling tasks which can run in parallel; threads execute these tasks, and the Thread Pool Pattern helps manage those threads. For example, one thread could accept new requests for websockets, and for each request a new thread would be spawned to run the client side of that application's websocket.&lt;br /&gt;
&lt;br /&gt;
==Proposed Test Cases==&lt;br /&gt;
==Further Readings==&lt;br /&gt;
==References==&lt;br /&gt;
&amp;lt;references/&amp;gt;&lt;/div&gt;</summary>
		<author><name>Achen4</name></author>
	</entry>
	<entry>
		<id>https://wiki.expertiza.ncsu.edu/index.php?title=CSC/ECE_517_Spring_2015/oss_S1504_AAC&amp;diff=95796</id>
		<title>CSC/ECE 517 Spring 2015/oss S1504 AAC</title>
		<link rel="alternate" type="text/html" href="https://wiki.expertiza.ncsu.edu/index.php?title=CSC/ECE_517_Spring_2015/oss_S1504_AAC&amp;diff=95796"/>
		<updated>2015-03-23T21:38:27Z</updated>

		<summary type="html">&lt;p&gt;Achen4: /* Implementation */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;==About Sahana==&lt;br /&gt;
[http://eden.sahanafoundation.org/ Sahana Eden] is an Open Source Humanitarian Platform which can be used to provide solutions for the Disaster Management, Development, and Environmental Management sectors. Being open source, it is easily customisable, extensible and free. It is supported by the [http://sahanafoundation.org/ Sahana Software Foundation]. Sahana Eden was first developed in Sri Lanka in 2005, as a response to the 2004 Indian Ocean Tsunami. The code for the Sahana Eden project is hosted on [https://github.com/flavour/eden Github] and is published under the [http://en.wikipedia.org/wiki/MIT_License MIT License]. The demo version of Sahana Eden can be found [http://demo.eden.sahanafoundation.org/eden/ here].&lt;br /&gt;
&lt;br /&gt;
Eden is a flexible humanitarian platform with a rich feature set which can be rapidly customized to adapt to existing processes and integrate with existing systems, providing effective solutions for critical humanitarian needs management either prior to or during a crisis. Sahana Eden contains a number of different modules, such as 'Organization Registry', 'Project Tracking', 'Human Resources', 'Inventory', 'Assets', 'Assessments', 'Scenarios &amp;amp; Events', 'Mapping' and 'Messaging', which can be configured to provide a wide range of functionality. We are contributing to Sahana Eden as part of the Open-Source Software (OSS) project for our Object-Oriented Design and Development course. In this wiki page, we explain the goals of our project and how we implemented them.&lt;br /&gt;
&lt;br /&gt;
==Goals of the project==&lt;br /&gt;
*Extend the Geonames searchCombo feature embedded in the Sahana map module&lt;br /&gt;
*Search the internal gis_location table for a partial match of the string entered in the search field&lt;br /&gt;
*Populate partial matches in the autocomplete drop-down&lt;br /&gt;
*When the user selects a result from the drop-down, zoom to that location&lt;br /&gt;
*Default to geonames.org when the internal search returns no results&lt;br /&gt;
&lt;br /&gt;
==Implementation==&lt;br /&gt;
The implementation of the zoom-to-location feature is contained entirely in the search_gis_locations() function in controllers/gis.py and in static/scripts/gis/GeoExt/ux/GeoNamesSearchCombo.js. The only significant change to GeoNamesSearchCombo.js was the url option, which makes the search box call the search_gis_locations() method instead of querying the geonames.org website directly for a search location (see lines 220-221 on the pull request page).&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
The search_gis_locations() function primarily uses an adapter design. It allows the GeoNamesSearchCombo.js to query the internal database and geonames.org for data during a user search.&lt;br /&gt;
The implementation of the search_gis_locations() function can be broken down into two sections: Searching the Internal database, and Searching Geonames.&lt;br /&gt;
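&lt;br /&gt;
To make the two-part structure easier to follow, here is a condensed sketch of the control flow. It is assembled from the description and the code excerpts below, not copied from the pull request; the helpers search_internal_table() and search_geonames() are invented names standing in for the two sections that follow, and get_vars, request and json come from the web2py environment.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
def search_gis_locations():&lt;br /&gt;
    user_str = get_vars[&amp;quot;name_startsWith&amp;quot;]       # partial name typed by the user&lt;br /&gt;
    callback_func = request.vars[&amp;quot;callback&amp;quot;]      # JSONP callback name&lt;br /&gt;
&lt;br /&gt;
    # First try the internal gis_location table.&lt;br /&gt;
    results, count = search_internal_table(user_str)&lt;br /&gt;
&lt;br /&gt;
    # Fall back to geonames.org when nothing matched locally.&lt;br /&gt;
    if count == 0:&lt;br /&gt;
        results = search_geonames(user_str)&lt;br /&gt;
&lt;br /&gt;
    # Wrap everything in the JSONP callback the autocomplete widget expects.&lt;br /&gt;
    returnVal = {&amp;quot;gislocations&amp;quot;: results, &amp;quot;totalResultsCount&amp;quot;: count}&lt;br /&gt;
    return callback_func + '(' + json.dumps(returnVal) + ')'&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;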
&lt;br /&gt;
&lt;br /&gt;
'''Notes:'''&lt;br /&gt;
*Any groups wishing to extend, understand, or review the zoom to location feature enhancement should see this github pull request: https://github.com/flavour/eden/pull/1077/files&lt;br /&gt;
*For tracking purposes, the dialog for this project can be found on this google groups thread: https://groups.google.com/forum/#!topic/sahana-eden/vS54iJEDqvQ&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
===Searching the Internal database===&lt;br /&gt;
'''The following code searches the 'name' column in the gis_location database for items &amp;quot;starting with&amp;quot; the string entered in the search field of the map page on Eden.'''&lt;br /&gt;
&lt;br /&gt;
This code gets the parameters from the URL. The user's string is stored in a parameter called &amp;quot;name_startsWith&amp;quot;. Because Eden expects a JSONP response, a callback function is needed; the name of the callback function is passed in the &amp;quot;callback&amp;quot; parameter of the URL and saved in the callback_func variable. We also select the &amp;quot;gis_location&amp;quot; table to get ready for the query.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
    #Get vars from url&lt;br /&gt;
    user_str = get_vars[&amp;quot;name_startsWith&amp;quot;]&lt;br /&gt;
    callback_func = request.vars[&amp;quot;callback&amp;quot;]&lt;br /&gt;
    atable = db.gis_location&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Next, the code performs the query on the &amp;quot;name&amp;quot; column of the gis_location table. Note that &amp;quot;%&amp;quot; is a wildcard and allows us to do a partial (prefix) match with the string the user entered into the search field. The select picks out the fields of interest: for the search we only care about the entry id, the level (used to choose the zoom), the name, the latitude, and the longitude.&lt;br /&gt;
&lt;br /&gt;
We also count and store the number of results from the search since it's a required field for the search box.&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
    query = atable.name.lower().like(user_str + '%')&lt;br /&gt;
    rows = db(query).select(atable.id,&lt;br /&gt;
                            atable.level,&lt;br /&gt;
                            atable.name,&lt;br /&gt;
                            atable.lat,&lt;br /&gt;
                            atable.lon&lt;br /&gt;
                            )&lt;br /&gt;
    results = []&lt;br /&gt;
    count = 0&lt;br /&gt;
    for row in rows:&lt;br /&gt;
        count += 1&lt;br /&gt;
        result = {}&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
To parallel the level codes returned from geonames.org, the code translates the level field from the gis_location table into equivalent geonames.org feature codes. See http://www.geonames.org/export/codes.html for a complete list of feature codes.&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
        #Convert the level column into the ADM codes geonames returns&lt;br /&gt;
        level = row[&amp;quot;gis_location.level&amp;quot;]&lt;br /&gt;
        if level==&amp;quot;L0&amp;quot;: #Country&lt;br /&gt;
            fcode = &amp;quot;PCL&amp;quot; #Zoom 5&lt;br /&gt;
        elif level==&amp;quot;L1&amp;quot;: #State/Province&lt;br /&gt;
            fcode = &amp;quot;ADM1&amp;quot;&lt;br /&gt;
        elif level==&amp;quot;L2&amp;quot;: #County/District&lt;br /&gt;
            fcode = &amp;quot;ADM2&amp;quot;&lt;br /&gt;
        elif level==&amp;quot;L3&amp;quot;: #Village/Suburb&lt;br /&gt;
            fcode = &amp;quot;ADM3&amp;quot;&lt;br /&gt;
        else: #City/Town/Village&lt;br /&gt;
            fcode = &amp;quot;ADM4&amp;quot;&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Lastly, we gather the id, fcode, name, lat, and lng fields in a dictionary and append it to the results list.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;       &lt;br /&gt;
        result = {&amp;quot;id&amp;quot; : row[&amp;quot;gis_location.id&amp;quot;],&lt;br /&gt;
                  &amp;quot;fcode&amp;quot; : fcode,&lt;br /&gt;
                  &amp;quot;name&amp;quot; : row[&amp;quot;gis_location.name&amp;quot;],&lt;br /&gt;
                  &amp;quot;lat&amp;quot; : row[&amp;quot;gis_location.lat&amp;quot;],&lt;br /&gt;
                  &amp;quot;lng&amp;quot; : row[&amp;quot;gis_location.lon&amp;quot;]}&lt;br /&gt;
        results.append(result)&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
===Searching Geonames===&lt;br /&gt;
&lt;br /&gt;
If the initial search for the user query on the internal 'Locations' table returns no results, then a search is done on [http://www.geonames.org/ Geonames] as a fallback. This fallback search is implemented within the 'gis' controller using 'urllib2', an extensible Python module for opening URLs. The base URL for the Geonames search is http://ws.geonames.org/searchJSON?. The searchJSON action expects the Geonames username, the prefix of the location names to be searched, the number of rows in the JSON result, etc. as parameters. The following code performs the HTTP GET request and loads the JSON response into a dictionary.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
username = settings.get_gis_geonames_username()&lt;br /&gt;
maxrows = &amp;quot;20&amp;quot;&lt;br /&gt;
lang = &amp;quot;en&amp;quot;&lt;br /&gt;
charset = &amp;quot;UTF8&amp;quot;&lt;br /&gt;
nameStartsWith = user_str&lt;br /&gt;
geonames_base_url = &amp;quot;http://ws.geonames.org/searchJSON?&amp;quot;&lt;br /&gt;
url = &amp;quot;%susername=%s&amp;amp;maxRows=%s&amp;amp;lang=%s&amp;amp;charset=%s&amp;amp;name_startsWith=%s&amp;quot; % (geonames_base_url,username,maxrows,lang,charset,nameStartsWith)&lt;br /&gt;
response = urllib2.urlopen(url)&lt;br /&gt;
dictResponse = json.loads(response.read().decode(response.info().getparam('charset') or 'utf-8'))&lt;br /&gt;
response.close()&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The JSON object in the response has two keys, 'totalResultsCount' and 'geonames'. The object corresponding to the 'geonames' key is an array of the Geonames search results, each of which is a dictionary in itself. Of all the keys present in a single result dictionary, the relevant ones are 'geonameId', 'fcode', 'name', 'lat' and 'lng'. These keys are extracted and a new dictionary is created for each search result; the array of new dictionaries is then returned as the response. The following code decodes the JSON object, creates new dictionary objects with the relevant keys and collects them into a single array.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
results = []&lt;br /&gt;
if dictResponse[&amp;quot;totalResultsCount&amp;quot;] != 0:&lt;br /&gt;
    geonamesResults = dictResponse[&amp;quot;geonames&amp;quot;]&lt;br /&gt;
    for geonamesResult in geonamesResults:&lt;br /&gt;
        result = {}&lt;br /&gt;
        result = {&amp;quot;id&amp;quot; : int(geonamesResult[&amp;quot;geonameId&amp;quot;]), &amp;quot;fcode&amp;quot; : str(geonamesResult[&amp;quot;fcode&amp;quot;]),&lt;br /&gt;
                  &amp;quot;name&amp;quot; : str(geonamesResult[&amp;quot;name&amp;quot;]),&amp;quot;lat&amp;quot; : float(geonamesResult[&amp;quot;lat&amp;quot;]),&lt;br /&gt;
                  &amp;quot;lng&amp;quot; : float(geonamesResult[&amp;quot;lng&amp;quot;])}&lt;br /&gt;
        results.append(result)&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
===JSONP Formatted Response===&lt;br /&gt;
The following code shows the format of the final response. The return value is a dictionary that contains 2 fields:&lt;br /&gt;
*gislocations - an array of the search result dictionaries. The fields of each result must be defined in GeoNamesSearchCombo.js (see lines 252-268 of the pull request for an example).&lt;br /&gt;
*totalResultsCount - an integer count of how many results are in the gislocations array&lt;br /&gt;
&lt;br /&gt;
Also note that unlike JSON, JSONP requires the return value to be wrapped inside a callback function.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
    returnVal = {}&lt;br /&gt;
    returnVal[&amp;quot;gislocations&amp;quot;] = results&lt;br /&gt;
    returnVal[&amp;quot;totalResultsCount&amp;quot;] = count&lt;br /&gt;
    &lt;br /&gt;
    #Autocomplete caller expects JSONP response. Callback wrapper.&lt;br /&gt;
    return callback_func+'('+json.dumps(returnVal)+')'&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;/div&gt;</summary>
		<author><name>Achen4</name></author>
	</entry>
	<entry>
		<id>https://wiki.expertiza.ncsu.edu/index.php?title=CSC/ECE_517_Spring_2015/oss_S1504_AAC&amp;diff=95285</id>
		<title>CSC/ECE 517 Spring 2015/oss S1504 AAC</title>
		<link rel="alternate" type="text/html" href="https://wiki.expertiza.ncsu.edu/index.php?title=CSC/ECE_517_Spring_2015/oss_S1504_AAC&amp;diff=95285"/>
		<updated>2015-03-21T23:03:39Z</updated>

		<summary type="html">&lt;p&gt;Achen4: /* Implementation */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;==About Sahana==&lt;br /&gt;
[http://eden.sahanafoundation.org/ Sahana Eden] is an Open Source Humanitarian Platform which can be used to provide solutions for the Disaster Management, Development, and Environmental Management sectors. Being open source, it is easily customisable, extensible and free. It is supported by the [http://sahanafoundation.org/ Sahana Software Foundation]. Sahana Eden was first developed in Sri Lanka in 2005, as a response to the 2004 Indian Ocean Tsunami. The code for the Sahana Eden project is hosted on [https://github.com/flavour/eden Github] and is published under the [http://en.wikipedia.org/wiki/MIT_License MIT License]. The demo version of Sahana Eden can be found [http://demo.eden.sahanafoundation.org/eden/ here].&lt;br /&gt;
&lt;br /&gt;
Eden is a flexible humanitarian platform with a rich feature set which can be rapidly customized to adapt to existing processes and integrate with existing systems, providing effective solutions for critical humanitarian needs management either prior to or during a crisis. Sahana Eden contains a number of different modules, such as 'Organization Registry', 'Project Tracking', 'Human Resources', 'Inventory', 'Assets', 'Assessments', 'Scenarios &amp;amp; Events', 'Mapping' and 'Messaging', which can be configured to provide a wide range of functionality. We are contributing to Sahana Eden as part of the Open-Source Software (OSS) project for our Object-Oriented Design and Development course. In this wiki page, we explain the goals of our project and how we implemented them.&lt;br /&gt;
&lt;br /&gt;
==Goals of the project==&lt;br /&gt;
*Extend the Geonames searchCombo feature embedded in the Sahana map module&lt;br /&gt;
*Search the internal gis_location table for a partial match of the string entered in the search field&lt;br /&gt;
*Populate partial matches in the autocomplete drop-down&lt;br /&gt;
*When the user selects a result from the drop-down, zoom to that location&lt;br /&gt;
*Default to geonames.org when the internal search returns no results&lt;br /&gt;
&lt;br /&gt;
==Implementation==&lt;br /&gt;
The implementation of the zoom-to-location feature is contained entirely in the search_gis_locations() function in controllers/gis.py and in static/scripts/gis/GeoExt/ux/GeoNamesSearchCombo.js. The only significant change to GeoNamesSearchCombo.js was the url option, which makes the search box call the search_gis_locations() method instead of querying the geonames.org website directly for a search location (see lines 220-221 on the pull request page).&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
The search_gis_locations() function primarily uses an adapter design. It allows the GeoNamesSearchCombo.js to query the internal database and geonames.org for data during a user search.&lt;br /&gt;
The implementation of the search_gis_locations() function can be broken down into two sections: Searching the Internal database, and Searching Geonames.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
'''Notes:'''&lt;br /&gt;
*Any groups wishing to extend, understand, or review the zoom to location feature enhancement should see this github pull request: https://github.com/flavour/eden/pull/1075/files&lt;br /&gt;
*For tracking purposes, the dialog for this project can be found on this google groups thread: https://groups.google.com/forum/#!topic/sahana-eden/vS54iJEDqvQ&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
===Searching the Internal database===&lt;br /&gt;
'''The following code searches the 'name' column in the gis_location database for items &amp;quot;starting with&amp;quot; the string entered in the search field of the map page on Eden.'''&lt;br /&gt;
&lt;br /&gt;
This code gets the parameters from the URL. The user's string is stored in a parameter called &amp;quot;name_startsWith&amp;quot;. Because Eden expects a JSONP response, a callback function is needed; the name of the callback function is passed in the &amp;quot;callback&amp;quot; parameter of the URL and saved in the callback_func variable. We also select the &amp;quot;gis_location&amp;quot; table to get ready for the query.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
    #Get vars from url&lt;br /&gt;
    user_str = get_vars[&amp;quot;name_startsWith&amp;quot;]&lt;br /&gt;
    callback_func = request.vars[&amp;quot;callback&amp;quot;]&lt;br /&gt;
    atable = db.gis_location&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Next, the code performs the query on the &amp;quot;name&amp;quot; column of the gis_location table. Note that &amp;quot;%&amp;quot; is a wildcard and allows us to do a partial (prefix) match with the string the user entered into the search field. The select picks out the fields of interest: for the search we only care about the entry id, the level (used to choose the zoom), the name, the latitude, and the longitude.&lt;br /&gt;
&lt;br /&gt;
We also count and store the number of results from the search since it's a required field for the search box.&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
    query = atable.name.lower().like(user_str + '%')&lt;br /&gt;
    rows = db(query).select(atable.id,&lt;br /&gt;
                            atable.level,&lt;br /&gt;
                            atable.name,&lt;br /&gt;
                            atable.lat,&lt;br /&gt;
                            atable.lon&lt;br /&gt;
                            )&lt;br /&gt;
    results = []&lt;br /&gt;
    count = 0&lt;br /&gt;
    for row in rows:&lt;br /&gt;
        count += 1&lt;br /&gt;
        result = {}&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
To parallel the feature codes returned by geonames.org, the controller translates the level field from the gis_location table into the equivalent geonames.org codes. See http://www.geonames.org/export/codes.html for a complete list of feature codes.&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
        #Convert the level column into the ADM codes geonames returns&lt;br /&gt;
        level = row[&amp;quot;gis_location.level&amp;quot;]&lt;br /&gt;
        if level==&amp;quot;L0&amp;quot;: #Country&lt;br /&gt;
            fcode = &amp;quot;PCL&amp;quot; #Zoom 5&lt;br /&gt;
        elif level==&amp;quot;L1&amp;quot;: #State/Province&lt;br /&gt;
            fcode = &amp;quot;ADM1&amp;quot;&lt;br /&gt;
        elif level==&amp;quot;L2&amp;quot;: #County/District&lt;br /&gt;
            fcode = &amp;quot;ADM2&amp;quot;&lt;br /&gt;
        elif level==&amp;quot;L3&amp;quot;: #Village/Suburb&lt;br /&gt;
            fcode = &amp;quot;ADM3&amp;quot;&lt;br /&gt;
        else: #City/Town/Village&lt;br /&gt;
            fcode = &amp;quot;ADM4&amp;quot;&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
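&lt;br /&gt;
The same mapping could also be written as a dictionary lookup with a default value. The snippet below is only an equivalent sketch of the if/elif chain above, not the code that was submitted:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
        #Equivalent lookup table for the level-to-fcode conversion&lt;br /&gt;
        LEVEL_TO_FCODE = {&amp;quot;L0&amp;quot;: &amp;quot;PCL&amp;quot;,   #Country&lt;br /&gt;
                          &amp;quot;L1&amp;quot;: &amp;quot;ADM1&amp;quot;,  #State/Province&lt;br /&gt;
                          &amp;quot;L2&amp;quot;: &amp;quot;ADM2&amp;quot;,  #County/District&lt;br /&gt;
                          &amp;quot;L3&amp;quot;: &amp;quot;ADM3&amp;quot;}  #Village/Suburb&lt;br /&gt;
        fcode = LEVEL_TO_FCODE.get(level, &amp;quot;ADM4&amp;quot;)  #City/Town/Village by default&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;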
&lt;br /&gt;
Lastly, we gather the id, fcode, name, lat, and lng fields into a dictionary and append it to the results list.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;       &lt;br /&gt;
        result = {&amp;quot;id&amp;quot; : row[&amp;quot;gis_location.id&amp;quot;],&lt;br /&gt;
                  &amp;quot;fcode&amp;quot; : fcode,&lt;br /&gt;
                  &amp;quot;name&amp;quot; : row[&amp;quot;gis_location.name&amp;quot;],&lt;br /&gt;
                  &amp;quot;lat&amp;quot; : row[&amp;quot;gis_location.lat&amp;quot;],&lt;br /&gt;
                  &amp;quot;lng&amp;quot; : row[&amp;quot;gis_location.lon&amp;quot;]}&lt;br /&gt;
        results.append(result)&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
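&lt;br /&gt;
For illustration, a single internal match would contribute an entry shaped like the one below (the values are hypothetical):&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
    result = {&amp;quot;id&amp;quot;: 42, &amp;quot;fcode&amp;quot;: &amp;quot;ADM1&amp;quot;, &amp;quot;name&amp;quot;: &amp;quot;North Carolina&amp;quot;,&lt;br /&gt;
              &amp;quot;lat&amp;quot;: 35.78, &amp;quot;lng&amp;quot;: -78.64}&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;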
&lt;br /&gt;
===Searching Geonames===&lt;br /&gt;
&lt;br /&gt;
If the initial search for the user's query on the internal 'Locations' table returns no results, then a search is done on [http://www.geonames.org/ Geonames] as a fallback. This fallback search is implemented within the 'gis' controller using 'urllib2', an extensible Python module for opening URLs. The base URL for the Geonames search is http://ws.geonames.org/searchJSON?. The searchJSON action takes the Geonames username, the prefix of the location names to search for, the maximum number of rows in the JSON result, and other options as parameters. The following code performs the HTTP GET request and loads the JSON response into a dictionary.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
username = settings.get_gis_geonames_username()&lt;br /&gt;
maxrows = &amp;quot;20&amp;quot;&lt;br /&gt;
lang = &amp;quot;en&amp;quot;&lt;br /&gt;
charset = &amp;quot;UTF8&amp;quot;&lt;br /&gt;
nameStartsWith = user_str&lt;br /&gt;
geonames_base_url = &amp;quot;http://ws.geonames.org/searchJSON?&amp;quot;&lt;br /&gt;
url = &amp;quot;%susername=%s&amp;amp;maxRows=%s&amp;amp;lang=%s&amp;amp;charset=%s&amp;amp;name_startsWith=%s&amp;quot; % (geonames_base_url,username,maxrows,lang,charset,nameStartsWith)&lt;br /&gt;
response = urllib2.urlopen(url)&lt;br /&gt;
dictResponse = json.loads(response.read().decode(response.info().getparam('charset') or 'utf-8'))&lt;br /&gt;
response.close()&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
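&lt;br /&gt;
One caveat: because the URL is built by plain string interpolation, a search string containing spaces or non-ASCII characters would produce an invalid URL. The sketch below, which is not part of the submitted change, shows how urllib could be used to encode the query parameters instead:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
import urllib&lt;br /&gt;
&lt;br /&gt;
# Let urllib handle percent-encoding of the user-supplied search string&lt;br /&gt;
params = urllib.urlencode({&amp;quot;username&amp;quot;: username,&lt;br /&gt;
                           &amp;quot;maxRows&amp;quot;: maxrows,&lt;br /&gt;
                           &amp;quot;lang&amp;quot;: lang,&lt;br /&gt;
                           &amp;quot;charset&amp;quot;: charset,&lt;br /&gt;
                           &amp;quot;name_startsWith&amp;quot;: user_str})&lt;br /&gt;
url = geonames_base_url + params&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;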
&lt;br /&gt;
The JSON object in the response has two keys, 'totalResultsCount' and 'geonames'. The value of the 'geonames' key is an array of the Geonames search results, each of which is itself a dictionary. Of all the keys present in a single result dictionary, the relevant ones are 'geonameId', 'fcode', 'name', 'lat' and 'lng'. These keys are extracted and a new dictionary is created for each search result; the array of new dictionaries is then returned as the response. The following code decodes the JSON object, creates the new dictionary objects with the relevant keys, and collects them into a single array.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
results = []&lt;br /&gt;
if dictResponse[&amp;quot;totalResultsCount&amp;quot;] != 0:&lt;br /&gt;
    geonamesResults = dictResponse[&amp;quot;geonames&amp;quot;]&lt;br /&gt;
    for geonamesResult in geonamesResults:&lt;br /&gt;
        result = {}&lt;br /&gt;
        result = {&amp;quot;id&amp;quot; : int(geonamesResult[&amp;quot;geonameId&amp;quot;]), &amp;quot;fcode&amp;quot; : str(geonamesResult[&amp;quot;fcode&amp;quot;]),&lt;br /&gt;
                  &amp;quot;name&amp;quot; : str(geonamesResult[&amp;quot;name&amp;quot;]),&amp;quot;lat&amp;quot; : float(geonamesResult[&amp;quot;lat&amp;quot;]),&lt;br /&gt;
                  &amp;quot;lng&amp;quot; : float(geonamesResult[&amp;quot;lng&amp;quot;])}&lt;br /&gt;
        results.append(result)&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==JSONP Formatted Response==&lt;/div&gt;</summary>
		<author><name>Achen4</name></author>
	</entry>
	<entry>
		<id>https://wiki.expertiza.ncsu.edu/index.php?title=CSC/ECE_517_Spring_2015/oss_S1504_AAC&amp;diff=95284</id>
		<title>CSC/ECE 517 Spring 2015/oss S1504 AAC</title>
		<link rel="alternate" type="text/html" href="https://wiki.expertiza.ncsu.edu/index.php?title=CSC/ECE_517_Spring_2015/oss_S1504_AAC&amp;diff=95284"/>
		<updated>2015-03-21T23:02:59Z</updated>

		<summary type="html">&lt;p&gt;Achen4: /* Searching the Internal database */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;==About Sahana==&lt;br /&gt;
[http://eden.sahanafoundation.org/ Sahana Eden] is an Open Source Humanitarian Platform which can be used to provide solutions for Disaster Management, Development, and Environmental Management sectors. Being open source, it is easily customisable, extensible and free. It is supported by the [http://sahanafoundation.org/ Sahana Software Foundation]. Sahana Eden was first developed in Sri Lanka as a response to the Indian Ocean Tsunami in 2005. The code for the Sahana Eden project is hosted at [https://github.com/flavour/eden Github] and it is published under the [http://en.wikipedia.org/wiki/MIT_License MIT License]. The demo version of Sahana Eden can be found [http://demo.eden.sahanafoundation.org/eden/ here].&lt;br /&gt;
&lt;br /&gt;
Eden is a flexible humanitarian platform with a rich feature set which can be rapidly customized to adapt to existing processes and integrate with existing systems to provide effective solutions for critical humanitarian needs management either prior to or during a crisis. Sahana Eden contains a number of different modules like 'Organization Registry', 'Project Tracking', 'Human Resources', 'Inventory', 'Assets', 'Assessments', 'Scenarios &amp;amp; Events', 'Mapping', 'Messaging' which can be configured to provide a wide range of functionality.  We are contributing to Sahana Eden as a part of our Object-Oriented Design and Development's Open-Source Software (OSS) Project. In this Wiki Page, we would be explaining the goals of our project and how we implemented them.&lt;br /&gt;
&lt;br /&gt;
==Goals of the project==&lt;br /&gt;
*Extend upon the Geonames searchCombo feature embedded in Sahana map module&lt;br /&gt;
*Search the internal gis_locations table for a partial match of the entered string in the search field&lt;br /&gt;
*Partial matches should populate in the autocomplete drop down&lt;br /&gt;
*When user selects a result from the drop down, zoom to that location&lt;br /&gt;
*Default to geonames.org when internal search returns no results&lt;br /&gt;
&lt;br /&gt;
==Implementation==&lt;br /&gt;
The implementation of the zoom to location feature is contained entirely in the search_gis_locations() function in controllers/gis.py and static/scripts/gis/GeoExt/ux/GeoNamesSearchCombo.js. The only significant change to the GeoNamesSearchCombo.js was the url option which makes the search box call the search_gis_locations() method instead of querying the geonames.org website for a search location (see lines 220-221 on the pull request page).&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
The search_gis_locations() function primarily uses an adapter design. It allows the GeoNamesSearchCombo.js to query the internal database and geonames.org for data during a user search.&lt;br /&gt;
The implementation of the search_gis_locations() function can be broken down into two sections: Searching the Internal database, and Searching Geonames.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
'''Notes:'''&lt;br /&gt;
*Any groups wishing to extend, understand, or review the zoom to location feature enhancement should see this github pull request: https://github.com/flavour/eden/pull/1075/files&lt;br /&gt;
*For tracking purposes, the dialog for this project can be found on this google groups thread: https://groups.google.com/forum/#!topic/sahana-eden/vS54iJEDqvQ&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
===Searching the Internal database===&lt;br /&gt;
'''The following code searches the 'name' column in the gis_location database for items &amp;quot;starting with&amp;quot; the string entered in the search field of the map page on Eden.'''&lt;br /&gt;
&lt;br /&gt;
This code gets the parameters from the url. The user string is stored in a parameter called &amp;quot;name_startsWith&amp;quot;. Because eden expects a JSONP response, a callback function is needed. The name of the call back function is stored in the &amp;quot;callback_func&amp;quot; parameter of the url. We also select the &amp;quot;gis_location&amp;quot; table to get ready for the query.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
    #Get vars from url&lt;br /&gt;
    user_str = get_vars[&amp;quot;name_startsWith&amp;quot;]&lt;br /&gt;
    callback_func = request.vars[&amp;quot;callback&amp;quot;]&lt;br /&gt;
    atable = db.gis_location&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Next the code performs the query on the &amp;quot;name&amp;quot; column of the gis_location table. Note that the &amp;quot;%&amp;quot; is a wildcard and allows us to do a partial match with the string the user entered into the search field. The rows variable selects the fields of our interest. For the search, we only care about the entry id, zoom level (aka. level), name, latitude, and longitude.&lt;br /&gt;
&lt;br /&gt;
We also count and store the number of results from the search since it's a required field for the search box.&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
    query = atable.name.lower().like(user_str + '%')&lt;br /&gt;
    rows = db(query).select(atable.id,&lt;br /&gt;
                            atable.level,&lt;br /&gt;
                            atable.name,&lt;br /&gt;
                            atable.lat,&lt;br /&gt;
                           atable.lon&lt;br /&gt;
                            )&lt;br /&gt;
    results = []&lt;br /&gt;
    count = 0&lt;br /&gt;
    for row in rows:&lt;br /&gt;
        count += 1&lt;br /&gt;
        result = {}&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
To parallel the level codes returned from geonames.org, the internal database translates the level field from the gis_location table into equivalent codes from geonames.org. See http://www.geonames.org/export/codes.html for a complete list of feature codes.&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
        #Convert the level colum into the ADM codes geonames returns&lt;br /&gt;
        #fcode = row[&amp;quot;gis_location.level&amp;quot;]&lt;br /&gt;
        level = row[&amp;quot;gis_location.level&amp;quot;]&lt;br /&gt;
        if level==&amp;quot;L0&amp;quot;: #Country&lt;br /&gt;
            fcode = &amp;quot;PCL&amp;quot; #Zoom 5&lt;br /&gt;
        elif level==&amp;quot;L1&amp;quot;: #State/Province&lt;br /&gt;
            fcode = &amp;quot;ADM1&amp;quot;&lt;br /&gt;
        elif level==&amp;quot;L2&amp;quot;: #County/District&lt;br /&gt;
            fcode = &amp;quot;ADM2&amp;quot;&lt;br /&gt;
        elif level==&amp;quot;L3&amp;quot;: #Village/Suburb&lt;br /&gt;
            fcode = &amp;quot;ADM3&amp;quot;&lt;br /&gt;
        else: #City/Town/Village&lt;br /&gt;
            fcode = &amp;quot;ADM4&amp;quot;&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Lastly, we gather the id, fcode, name, lat, and lng fields in a hash and append them to the results variable&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;       &lt;br /&gt;
        result = {&amp;quot;id&amp;quot; : row[&amp;quot;gis_location.id&amp;quot;],&lt;br /&gt;
                  &amp;quot;fcode&amp;quot; : fcode,&lt;br /&gt;
                  &amp;quot;name&amp;quot; : row[&amp;quot;gis_location.name&amp;quot;],&lt;br /&gt;
                  &amp;quot;lat&amp;quot; : row[&amp;quot;gis_location.lat&amp;quot;],&lt;br /&gt;
                  &amp;quot;lng&amp;quot; : row[&amp;quot;gis_location.lon&amp;quot;]}&lt;br /&gt;
        results.append(result)&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
===Searching Geonames===&lt;br /&gt;
&lt;br /&gt;
If the initial search for the user query on the internal 'Locations' table, then a search is done on [http://www.geonames.org/ Geonames] as a fallback. This fallback search is implemented within the 'gis' controller using 'urllib2' an extensible Python module for opening URLs. The base URL for the Geonames search is http://ws.geonames.org/searchJSON?. The searchJSON action expects the Geonames username, the prefix of the location names to be searched, number of rows in the JSON result, etc. as parameters.  The following code performs the HTTP GET request and loads the JSON response in a dictionary.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
username = settings.get_gis_geonames_username()&lt;br /&gt;
maxrows = &amp;quot;20&amp;quot;&lt;br /&gt;
lang = &amp;quot;en&amp;quot;&lt;br /&gt;
charset = &amp;quot;UTF8&amp;quot;&lt;br /&gt;
nameStartsWith = user_str&lt;br /&gt;
geonames_base_url = &amp;quot;http://ws.geonames.org/searchJSON?&amp;quot;&lt;br /&gt;
url = &amp;quot;%susername=%s&amp;amp;maxRows=%s&amp;amp;lang=%s&amp;amp;charset=%s&amp;amp;name_startsWith=%s&amp;quot; % (geonames_base_url,username,maxrows,lang,charset,nameStartsWith)&lt;br /&gt;
response = urllib2.urlopen(url)&lt;br /&gt;
dictResponse = json.loads(response.read().decode(response.info().getparam('charset') or 'utf-8'))&lt;br /&gt;
response.close()&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The JSON object in the response has two keys 'totalResultsCount' and 'geonames'. The object corresponding to the 'geonames' key is an array of the Geonames search results, each of which is a dictionary in itself. Of all the Keys present in a single result dictionary, the relevant ones are 'id', 'fcode', 'name', 'lat' and 'lng'. The relevant keys are extracted and a new dictionary is created for each search result. The array of new dictionaries is then returned as a response. The following code performs the decoding of the JSON object, creating new dictionary objects with the relevant keys and then collecting them as a single array.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
results = []&lt;br /&gt;
if dictResponse[&amp;quot;totalResultsCount&amp;quot;] != 0:&lt;br /&gt;
    geonamesResults = dictResponse[&amp;quot;geonames&amp;quot;]&lt;br /&gt;
    for geonamesResult in geonamesResults:&lt;br /&gt;
        result = {}&lt;br /&gt;
        result = {&amp;quot;id&amp;quot; : int(geonamesResult[&amp;quot;geonameId&amp;quot;]), &amp;quot;fcode&amp;quot; : str(geonamesResult[&amp;quot;fcode&amp;quot;]),&lt;br /&gt;
                  &amp;quot;name&amp;quot; : str(geonamesResult[&amp;quot;name&amp;quot;]),&amp;quot;lat&amp;quot; : float(geonamesResult[&amp;quot;lat&amp;quot;]),&lt;br /&gt;
                  &amp;quot;lng&amp;quot; : float(geonamesResult[&amp;quot;lng&amp;quot;])}&lt;br /&gt;
        results.append(result)&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;/div&gt;</summary>
		<author><name>Achen4</name></author>
	</entry>
	<entry>
		<id>https://wiki.expertiza.ncsu.edu/index.php?title=CSC/ECE_517_Spring_2015/oss_S1504_AAC&amp;diff=95283</id>
		<title>CSC/ECE 517 Spring 2015/oss S1504 AAC</title>
		<link rel="alternate" type="text/html" href="https://wiki.expertiza.ncsu.edu/index.php?title=CSC/ECE_517_Spring_2015/oss_S1504_AAC&amp;diff=95283"/>
		<updated>2015-03-21T23:00:21Z</updated>

		<summary type="html">&lt;p&gt;Achen4: /* Searching the Internal database */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;==About Sahana==&lt;br /&gt;
[http://eden.sahanafoundation.org/ Sahana Eden] is an Open Source Humanitarian Platform which can be used to provide solutions for Disaster Management, Development, and Environmental Management sectors. Being open source, it is easily customisable, extensible and free. It is supported by the [http://sahanafoundation.org/ Sahana Software Foundation]. Sahana Eden was first developed in Sri Lanka as a response to the Indian Ocean Tsunami in 2005. The code for the Sahana Eden project is hosted at [https://github.com/flavour/eden Github] and it is published under the [http://en.wikipedia.org/wiki/MIT_License MIT License]. The demo version of Sahana Eden can be found [http://demo.eden.sahanafoundation.org/eden/ here].&lt;br /&gt;
&lt;br /&gt;
Eden is a flexible humanitarian platform with a rich feature set which can be rapidly customized to adapt to existing processes and integrate with existing systems to provide effective solutions for critical humanitarian needs management either prior to or during a crisis. Sahana Eden contains a number of different modules like 'Organization Registry', 'Project Tracking', 'Human Resources', 'Inventory', 'Assets', 'Assessments', 'Scenarios &amp;amp; Events', 'Mapping', 'Messaging' which can be configured to provide a wide range of functionality.  We are contributing to Sahana Eden as a part of our Object-Oriented Design and Development's Open-Source Software (OSS) Project. In this Wiki Page, we would be explaining the goals of our project and how we implemented them.&lt;br /&gt;
&lt;br /&gt;
==Goals of the project==&lt;br /&gt;
*Extend upon the Geonames searchCombo feature embedded in Sahana map module&lt;br /&gt;
*Search the internal gis_locations table for a partial match of the entered string in the search field&lt;br /&gt;
*Partial matches should populate in the autocomplete drop down&lt;br /&gt;
*When user selects a result from the drop down, zoom to that location&lt;br /&gt;
*Default to geonames.org when internal search returns no results&lt;br /&gt;
&lt;br /&gt;
==Implementation==&lt;br /&gt;
The implementation of the zoom to location feature is contained entirely in the search_gis_locations() function in controllers/gis.py and static/scripts/gis/GeoExt/ux/GeoNamesSearchCombo.js. The only significant change to the GeoNamesSearchCombo.js was the url option which makes the search box call the search_gis_locations() method instead of querying the geonames.org website for a search location (see lines 220-221 on the pull request page).&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
The search_gis_locations() function primarily uses an adapter design. It allows the GeoNamesSearchCombo.js to query the internal database and geonames.org for data during a user search.&lt;br /&gt;
The implementation of the search_gis_locations() function can be broken down into two sections: Searching the Internal database, and Searching Geonames.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
'''Notes:'''&lt;br /&gt;
*Any groups wishing to extend, understand, or review the zoom to location feature enhancement should see this github pull request: https://github.com/flavour/eden/pull/1075/files&lt;br /&gt;
*For tracking purposes, the dialog for this project can be found on this google groups thread: https://groups.google.com/forum/#!topic/sahana-eden/vS54iJEDqvQ&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
===Searching the Internal database===&lt;br /&gt;
'''The following code searches the 'name' column in the gis_location database for items &amp;quot;starting with&amp;quot; the string entered in the search field of the map page on Eden.'''&lt;br /&gt;
&lt;br /&gt;
This code gets the parameters from the url. The user string is stored in a parameter called &amp;quot;name_startsWith&amp;quot;. Because eden expects a JSONP response, a callback function is needed. The name of the call back function is stored in the &amp;quot;callback_func&amp;quot; parameter of the url. We also select the &amp;quot;gis_location&amp;quot; table to get ready for the query.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
    #Get vars from url&lt;br /&gt;
    user_str = get_vars[&amp;quot;name_startsWith&amp;quot;]&lt;br /&gt;
    callback_func = request.vars[&amp;quot;callback&amp;quot;]&lt;br /&gt;
    atable = db.gis_location&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Next the code performs the query on the &amp;quot;name&amp;quot; column of the gis_location table. Note that the &amp;quot;%&amp;quot; is a wildcard and allows us to do a partial match with the string the user entered into the search field. The rows variable selects the fields of our interest. For the search, we only care about the entry id, zoom level (aka. level), name, latitude, and longitude.&lt;br /&gt;
&lt;br /&gt;
We also count and store the number of results from the search since it's a required field for the search box.&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
    query = atable.name.lower().like(user_str + '%')&lt;br /&gt;
    rows = db(query).select(atable.id,&lt;br /&gt;
                            atable.level,&lt;br /&gt;
                            atable.name,&lt;br /&gt;
                            atable.lat,&lt;br /&gt;
                           atable.lon&lt;br /&gt;
                            )&lt;br /&gt;
    results = []&lt;br /&gt;
    count = 0&lt;br /&gt;
    for row in rows:&lt;br /&gt;
        count += 1&lt;br /&gt;
        result = {}&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
To parallel the level codes returned from geonames.org, the internal database translates the level field from the gis_location table into equivalent codes from geonames.org. See http://www.geonames.org/export/codes.html for a complete list of feature codes.&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
        #Convert the level colum into the ADM codes geonames returns&lt;br /&gt;
        #fcode = row[&amp;quot;gis_location.level&amp;quot;]&lt;br /&gt;
        level = row[&amp;quot;gis_location.level&amp;quot;]&lt;br /&gt;
        if level==&amp;quot;L0&amp;quot;: #Country&lt;br /&gt;
            fcode = &amp;quot;PCL&amp;quot; #Zoom 5&lt;br /&gt;
        elif level==&amp;quot;L1&amp;quot;: #State/Province&lt;br /&gt;
            fcode = &amp;quot;ADM1&amp;quot;&lt;br /&gt;
        elif level==&amp;quot;L2&amp;quot;: #County/District&lt;br /&gt;
            fcode = &amp;quot;ADM2&amp;quot;&lt;br /&gt;
        elif level==&amp;quot;L3&amp;quot;: #Village/Suburb&lt;br /&gt;
            fcode = &amp;quot;ADM3&amp;quot;&lt;br /&gt;
        else: #City/Town/Village&lt;br /&gt;
            fcode = &amp;quot;ADM4&amp;quot;&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Lastly, we gather the id, fcode, name, lat, and lng fields and add them to results&lt;br /&gt;
&amp;lt;pre&amp;gt;       &lt;br /&gt;
        result = {&amp;quot;id&amp;quot; : row[&amp;quot;gis_location.id&amp;quot;],&lt;br /&gt;
                  &amp;quot;fcode&amp;quot; : fcode,&lt;br /&gt;
                  &amp;quot;name&amp;quot; : row[&amp;quot;gis_location.name&amp;quot;],&lt;br /&gt;
                  &amp;quot;lat&amp;quot; : row[&amp;quot;gis_location.lat&amp;quot;],&lt;br /&gt;
                  &amp;quot;lng&amp;quot; : row[&amp;quot;gis_location.lon&amp;quot;]}&lt;br /&gt;
        results.append(result)&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
===Searching Geonames===&lt;br /&gt;
&lt;br /&gt;
If the initial search for the user query on the internal 'Locations' table, then a search is done on [http://www.geonames.org/ Geonames] as a fallback. This fallback search is implemented within the 'gis' controller using 'urllib2' an extensible Python module for opening URLs. The base URL for the Geonames search is http://ws.geonames.org/searchJSON?. The searchJSON action expects the Geonames username, the prefix of the location names to be searched, number of rows in the JSON result, etc. as parameters.  The following code performs the HTTP GET request and loads the JSON response in a dictionary.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
username = settings.get_gis_geonames_username()&lt;br /&gt;
maxrows = &amp;quot;20&amp;quot;&lt;br /&gt;
lang = &amp;quot;en&amp;quot;&lt;br /&gt;
charset = &amp;quot;UTF8&amp;quot;&lt;br /&gt;
nameStartsWith = user_str&lt;br /&gt;
geonames_base_url = &amp;quot;http://ws.geonames.org/searchJSON?&amp;quot;&lt;br /&gt;
url = &amp;quot;%susername=%s&amp;amp;maxRows=%s&amp;amp;lang=%s&amp;amp;charset=%s&amp;amp;name_startsWith=%s&amp;quot; % (geonames_base_url,username,maxrows,lang,charset,nameStartsWith)&lt;br /&gt;
response = urllib2.urlopen(url)&lt;br /&gt;
dictResponse = json.loads(response.read().decode(response.info().getparam('charset') or 'utf-8'))&lt;br /&gt;
response.close()&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The JSON object in the response has two keys 'totalResultsCount' and 'geonames'. The object corresponding to the 'geonames' key is an array of the Geonames search results, each of which is a dictionary in itself. Of all the Keys present in a single result dictionary, the relevant ones are 'id', 'fcode', 'name', 'lat' and 'lng'. The relevant keys are extracted and a new dictionary is created for each search result. The array of new dictionaries is then returned as a response. The following code performs the decoding of the JSON object, creating new dictionary objects with the relevant keys and then collecting them as a single array.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
results = []&lt;br /&gt;
if dictResponse[&amp;quot;totalResultsCount&amp;quot;] != 0:&lt;br /&gt;
    geonamesResults = dictResponse[&amp;quot;geonames&amp;quot;]&lt;br /&gt;
    for geonamesResult in geonamesResults:&lt;br /&gt;
        result = {}&lt;br /&gt;
        result = {&amp;quot;id&amp;quot; : int(geonamesResult[&amp;quot;geonameId&amp;quot;]), &amp;quot;fcode&amp;quot; : str(geonamesResult[&amp;quot;fcode&amp;quot;]),&lt;br /&gt;
                  &amp;quot;name&amp;quot; : str(geonamesResult[&amp;quot;name&amp;quot;]),&amp;quot;lat&amp;quot; : float(geonamesResult[&amp;quot;lat&amp;quot;]),&lt;br /&gt;
                  &amp;quot;lng&amp;quot; : float(geonamesResult[&amp;quot;lng&amp;quot;])}&lt;br /&gt;
        results.append(result)&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;/div&gt;</summary>
		<author><name>Achen4</name></author>
	</entry>
	<entry>
		<id>https://wiki.expertiza.ncsu.edu/index.php?title=CSC/ECE_517_Spring_2015/oss_S1504_AAC&amp;diff=95282</id>
		<title>CSC/ECE 517 Spring 2015/oss S1504 AAC</title>
		<link rel="alternate" type="text/html" href="https://wiki.expertiza.ncsu.edu/index.php?title=CSC/ECE_517_Spring_2015/oss_S1504_AAC&amp;diff=95282"/>
		<updated>2015-03-21T22:56:06Z</updated>

		<summary type="html">&lt;p&gt;Achen4: /* Implementation */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;==About Sahana==&lt;br /&gt;
[http://eden.sahanafoundation.org/ Sahana Eden] is an Open Source Humanitarian Platform which can be used to provide solutions for Disaster Management, Development, and Environmental Management sectors. Being open source, it is easily customisable, extensible and free. It is supported by the [http://sahanafoundation.org/ Sahana Software Foundation]. Sahana Eden was first developed in Sri Lanka as a response to the Indian Ocean Tsunami in 2005. The code for the Sahana Eden project is hosted at [https://github.com/flavour/eden Github] and it is published under the [http://en.wikipedia.org/wiki/MIT_License MIT License]. The demo version of Sahana Eden can be found [http://demo.eden.sahanafoundation.org/eden/ here].&lt;br /&gt;
&lt;br /&gt;
Eden is a flexible humanitarian platform with a rich feature set which can be rapidly customized to adapt to existing processes and integrate with existing systems to provide effective solutions for critical humanitarian needs management either prior to or during a crisis. Sahana Eden contains a number of different modules like 'Organization Registry', 'Project Tracking', 'Human Resources', 'Inventory', 'Assets', 'Assessments', 'Scenarios &amp;amp; Events', 'Mapping', 'Messaging' which can be configured to provide a wide range of functionality.  We are contributing to Sahana Eden as a part of our Object-Oriented Design and Development's Open-Source Software (OSS) Project. In this Wiki Page, we would be explaining the goals of our project and how we implemented them.&lt;br /&gt;
&lt;br /&gt;
==Goals of the project==&lt;br /&gt;
*Extend upon the Geonames searchCombo feature embedded in Sahana map module&lt;br /&gt;
*Search the internal gis_locations table for a partial match of the entered string in the search field&lt;br /&gt;
*Partial matches should populate in the autocomplete drop down&lt;br /&gt;
*When user selects a result from the drop down, zoom to that location&lt;br /&gt;
*Default to geonames.org when internal search returns no results&lt;br /&gt;
&lt;br /&gt;
==Implementation==&lt;br /&gt;
The implementation of the zoom to location feature is contained entirely in the search_gis_locations() function in controllers/gis.py and static/scripts/gis/GeoExt/ux/GeoNamesSearchCombo.js. The only significant change to the GeoNamesSearchCombo.js was the url option which makes the search box call the search_gis_locations() method instead of querying the geonames.org website for a search location (see lines 220-221 on the pull request page).&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
The search_gis_locations() function primarily uses an adapter design. It allows the GeoNamesSearchCombo.js to query the internal database and geonames.org for data during a user search.&lt;br /&gt;
The implementation of the search_gis_locations() function can be broken down into two sections: Searching the Internal database, and Searching Geonames.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
'''Notes:'''&lt;br /&gt;
*Any groups wishing to extend, understand, or review the zoom to location feature enhancement should see this github pull request: https://github.com/flavour/eden/pull/1075/files&lt;br /&gt;
*For tracking purposes, the dialog for this project can be found on this google groups thread: https://groups.google.com/forum/#!topic/sahana-eden/vS54iJEDqvQ&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
===Searching the Internal database===&lt;br /&gt;
The following code searches the 'name' column in the gis_location database for items &amp;quot;starting with&amp;quot; the string entered in the search field of the map page on Eden.&lt;br /&gt;
&lt;br /&gt;
Here we are getting the parameters from the url. The user string is stored in a parameter called &amp;quot;name_startsWith&amp;quot;. Because eden expects a JSONP response, a callback function is needed. The name of the call back function is stored in the &amp;quot;callback_func&amp;quot; parameter of the url. We also select the &amp;quot;gis_location&amp;quot; table to get ready for the query.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
    #Get vars from url&lt;br /&gt;
    user_str = get_vars[&amp;quot;name_startsWith&amp;quot;]&lt;br /&gt;
    callback_func = request.vars[&amp;quot;callback&amp;quot;]&lt;br /&gt;
    atable = db.gis_location&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Next we perform the query on the &amp;quot;name&amp;quot; column of the gis_location table. Note that the &amp;quot;%&amp;quot; is a wildcard and allows us to do a partial match with the string the user entered into the search field. The rows variable selects the fields of our interest. For the search, we only care about the entry id, zoom level (aka. level), name, latitude, and longitude.&lt;br /&gt;
&lt;br /&gt;
We also count and store the number of results from the search since it's a required field for the search box.&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
    query = atable.name.lower().like(user_str + '%')&lt;br /&gt;
    rows = db(query).select(atable.id,&lt;br /&gt;
                            atable.level,&lt;br /&gt;
                            atable.name,&lt;br /&gt;
                            atable.lat,&lt;br /&gt;
                           atable.lon&lt;br /&gt;
                            )&lt;br /&gt;
    results = []&lt;br /&gt;
    count = 0&lt;br /&gt;
    for row in rows:&lt;br /&gt;
        count += 1&lt;br /&gt;
        result = {}&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
        #Convert the level colum into the ADM codes geonames returns&lt;br /&gt;
        #fcode = row[&amp;quot;gis_location.level&amp;quot;]&lt;br /&gt;
        level = row[&amp;quot;gis_location.level&amp;quot;]&lt;br /&gt;
        if level==&amp;quot;L0&amp;quot;: #Country&lt;br /&gt;
            fcode = &amp;quot;PCL&amp;quot; #Zoom 5&lt;br /&gt;
        elif level==&amp;quot;L1&amp;quot;: #State/Province&lt;br /&gt;
            fcode = &amp;quot;ADM1&amp;quot;&lt;br /&gt;
        elif level==&amp;quot;L2&amp;quot;: #County/District&lt;br /&gt;
            fcode = &amp;quot;ADM2&amp;quot;&lt;br /&gt;
        elif level==&amp;quot;L3&amp;quot;: #Village/Suburb&lt;br /&gt;
            fcode = &amp;quot;ADM3&amp;quot;&lt;br /&gt;
        else: #City/Town/Village&lt;br /&gt;
            fcode = &amp;quot;ADM4&amp;quot;&lt;br /&gt;
             &lt;br /&gt;
        result = {&amp;quot;id&amp;quot; : row[&amp;quot;gis_location.id&amp;quot;],&lt;br /&gt;
                  &amp;quot;fcode&amp;quot; : fcode,&lt;br /&gt;
                  &amp;quot;name&amp;quot; : row[&amp;quot;gis_location.name&amp;quot;],&lt;br /&gt;
                  &amp;quot;lat&amp;quot; : row[&amp;quot;gis_location.lat&amp;quot;],&lt;br /&gt;
                  &amp;quot;lng&amp;quot; : row[&amp;quot;gis_location.lon&amp;quot;]}&lt;br /&gt;
        results.append(result)&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
===Searching Geonames===&lt;br /&gt;
&lt;br /&gt;
If the initial search for the user query on the internal 'Locations' table, then a search is done on [http://www.geonames.org/ Geonames] as a fallback. This fallback search is implemented within the 'gis' controller using 'urllib2' an extensible Python module for opening URLs. The base URL for the Geonames search is http://ws.geonames.org/searchJSON?. The searchJSON action expects the Geonames username, the prefix of the location names to be searched, number of rows in the JSON result, etc. as parameters.  The following code performs the HTTP GET request and loads the JSON response in a dictionary.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
username = settings.get_gis_geonames_username()&lt;br /&gt;
maxrows = &amp;quot;20&amp;quot;&lt;br /&gt;
lang = &amp;quot;en&amp;quot;&lt;br /&gt;
charset = &amp;quot;UTF8&amp;quot;&lt;br /&gt;
nameStartsWith = user_str&lt;br /&gt;
geonames_base_url = &amp;quot;http://ws.geonames.org/searchJSON?&amp;quot;&lt;br /&gt;
url = &amp;quot;%susername=%s&amp;amp;maxRows=%s&amp;amp;lang=%s&amp;amp;charset=%s&amp;amp;name_startsWith=%s&amp;quot; % (geonames_base_url,username,maxrows,lang,charset,nameStartsWith)&lt;br /&gt;
response = urllib2.urlopen(url)&lt;br /&gt;
dictResponse = json.loads(response.read().decode(response.info().getparam('charset') or 'utf-8'))&lt;br /&gt;
response.close()&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The JSON object in the response has two keys 'totalResultsCount' and 'geonames'. The object corresponding to the 'geonames' key is an array of the Geonames search results, each of which is a dictionary in itself. Of all the Keys present in a single result dictionary, the relevant ones are 'id', 'fcode', 'name', 'lat' and 'lng'. The relevant keys are extracted and a new dictionary is created for each search result. The array of new dictionaries is then returned as a response. The following code performs the decoding of the JSON object, creating new dictionary objects with the relevant keys and then collecting them as a single array.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
results = []&lt;br /&gt;
if dictResponse[&amp;quot;totalResultsCount&amp;quot;] != 0:&lt;br /&gt;
    geonamesResults = dictResponse[&amp;quot;geonames&amp;quot;]&lt;br /&gt;
    for geonamesResult in geonamesResults:&lt;br /&gt;
        result = {}&lt;br /&gt;
        result = {&amp;quot;id&amp;quot; : int(geonamesResult[&amp;quot;geonameId&amp;quot;]), &amp;quot;fcode&amp;quot; : str(geonamesResult[&amp;quot;fcode&amp;quot;]),&lt;br /&gt;
                  &amp;quot;name&amp;quot; : str(geonamesResult[&amp;quot;name&amp;quot;]),&amp;quot;lat&amp;quot; : float(geonamesResult[&amp;quot;lat&amp;quot;]),&lt;br /&gt;
                  &amp;quot;lng&amp;quot; : float(geonamesResult[&amp;quot;lng&amp;quot;])}&lt;br /&gt;
        results.append(result)&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;/div&gt;</summary>
		<author><name>Achen4</name></author>
	</entry>
	<entry>
		<id>https://wiki.expertiza.ncsu.edu/index.php?title=CSC/ECE_517_Spring_2015/oss_S1504_AAC&amp;diff=95281</id>
		<title>CSC/ECE 517 Spring 2015/oss S1504 AAC</title>
		<link rel="alternate" type="text/html" href="https://wiki.expertiza.ncsu.edu/index.php?title=CSC/ECE_517_Spring_2015/oss_S1504_AAC&amp;diff=95281"/>
		<updated>2015-03-21T22:53:48Z</updated>

		<summary type="html">&lt;p&gt;Achen4: /* Goals of the project */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;==About Sahana==&lt;br /&gt;
[http://eden.sahanafoundation.org/ Sahana Eden] is an Open Source Humanitarian Platform which can be used to provide solutions for Disaster Management, Development, and Environmental Management sectors. Being open source, it is easily customisable, extensible and free. It is supported by the [http://sahanafoundation.org/ Sahana Software Foundation]. Sahana Eden was first developed in Sri Lanka as a response to the Indian Ocean Tsunami in 2005. The code for the Sahana Eden project is hosted at [https://github.com/flavour/eden Github] and it is published under the [http://en.wikipedia.org/wiki/MIT_License MIT License]. The demo version of Sahana Eden can be found [http://demo.eden.sahanafoundation.org/eden/ here].&lt;br /&gt;
&lt;br /&gt;
Eden is a flexible humanitarian platform with a rich feature set which can be rapidly customized to adapt to existing processes and integrate with existing systems to provide effective solutions for critical humanitarian needs management either prior to or during a crisis. Sahana Eden contains a number of different modules like 'Organization Registry', 'Project Tracking', 'Human Resources', 'Inventory', 'Assets', 'Assessments', 'Scenarios &amp;amp; Events', 'Mapping', 'Messaging' which can be configured to provide a wide range of functionality.  We are contributing to Sahana Eden as a part of our Object-Oriented Design and Development's Open-Source Software (OSS) Project. In this Wiki Page, we would be explaining the goals of our project and how we implemented them.&lt;br /&gt;
&lt;br /&gt;
==Goals of the project==&lt;br /&gt;
*Extend upon the Geonames searchCombo feature embedded in Sahana map module&lt;br /&gt;
*Search the internal gis_locations table for a partial match of the entered string in the search field&lt;br /&gt;
*Partial matches should populate in the autocomplete drop down&lt;br /&gt;
*When user selects a result from the drop down, zoom to that location&lt;br /&gt;
*Default to geonames.org when internal search returns no results&lt;br /&gt;
&lt;br /&gt;
==Implementation==&lt;br /&gt;
The implementation of the zoom to location feature is contained entirely in the search_gis_locations() function in controllers/gis.py and static/scripts/gis/GeoExt/ux/GeoNamesSearchCombo.js. The only significant change to the GeoNamesSearchCombo.js was the url option which makes the search box call the search_gis_locations() method instead of querying the geonames.org website for a search location (see lines 220-221 on the pull request page).&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
The search_gis_locations() function primarily uses an adapter design. It allows the GeoNamesSearchCombo.js to query the internal database and geonames.org for data during a user search. The implementation of the search_gis_locations() function can be broken down into two section: Searching the Internal database, and Searching Geonames.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
'''Notes:'''&lt;br /&gt;
*Any groups wishing to extend, understand, or review the zoom to location feature enhancement should see this github pull request: https://github.com/flavour/eden/pull/1075/files&lt;br /&gt;
*For tracking purposes, the dialog for this project can be found on this google groups thread: https://groups.google.com/forum/#!topic/sahana-eden/vS54iJEDqvQ&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
===Searching the Internal database===&lt;br /&gt;
The following code searches the 'name' column in the gis_location database for items &amp;quot;starting with&amp;quot; the string entered in the search field of the map page on Eden.&lt;br /&gt;
&lt;br /&gt;
Here we are getting the parameters from the url. The user string is stored in a parameter called &amp;quot;name_startsWith&amp;quot;. Because eden expects a JSONP response, a callback function is needed. The name of the call back function is stored in the &amp;quot;callback_func&amp;quot; parameter of the url. We also select the &amp;quot;gis_location&amp;quot; table to get ready for the query.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
    #Get vars from url&lt;br /&gt;
    user_str = get_vars[&amp;quot;name_startsWith&amp;quot;]&lt;br /&gt;
    callback_func = request.vars[&amp;quot;callback&amp;quot;]&lt;br /&gt;
    atable = db.gis_location&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Next we perform the query on the &amp;quot;name&amp;quot; column of the gis_location table. Note that the &amp;quot;%&amp;quot; is a wildcard and allows us to do a partial match with the string the user entered into the search field. The rows variable selects the fields of our interest. For the search, we only care about the entry id, zoom level (aka. level), name, latitude, and longitude.&lt;br /&gt;
&lt;br /&gt;
We also count and store the number of results from the search since it's a required field for the search box.&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
    query = atable.name.lower().like(user_str + '%')&lt;br /&gt;
    rows = db(query).select(atable.id,&lt;br /&gt;
                            atable.level,&lt;br /&gt;
                            atable.name,&lt;br /&gt;
                            atable.lat,&lt;br /&gt;
                           atable.lon&lt;br /&gt;
                            )&lt;br /&gt;
    results = []&lt;br /&gt;
    count = 0&lt;br /&gt;
    for row in rows:&lt;br /&gt;
        count += 1&lt;br /&gt;
        result = {}&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
        #Convert the level colum into the ADM codes geonames returns&lt;br /&gt;
        #fcode = row[&amp;quot;gis_location.level&amp;quot;]&lt;br /&gt;
        level = row[&amp;quot;gis_location.level&amp;quot;]&lt;br /&gt;
        if level==&amp;quot;L0&amp;quot;: #Country&lt;br /&gt;
            fcode = &amp;quot;PCL&amp;quot; #Zoom 5&lt;br /&gt;
        elif level==&amp;quot;L1&amp;quot;: #State/Province&lt;br /&gt;
            fcode = &amp;quot;ADM1&amp;quot;&lt;br /&gt;
        elif level==&amp;quot;L2&amp;quot;: #County/District&lt;br /&gt;
            fcode = &amp;quot;ADM2&amp;quot;&lt;br /&gt;
        elif level==&amp;quot;L3&amp;quot;: #Village/Suburb&lt;br /&gt;
            fcode = &amp;quot;ADM3&amp;quot;&lt;br /&gt;
        else: #City/Town/Village&lt;br /&gt;
            fcode = &amp;quot;ADM4&amp;quot;&lt;br /&gt;
             &lt;br /&gt;
        result = {&amp;quot;id&amp;quot; : row[&amp;quot;gis_location.id&amp;quot;],&lt;br /&gt;
                  &amp;quot;fcode&amp;quot; : fcode,&lt;br /&gt;
                  &amp;quot;name&amp;quot; : row[&amp;quot;gis_location.name&amp;quot;],&lt;br /&gt;
                  &amp;quot;lat&amp;quot; : row[&amp;quot;gis_location.lat&amp;quot;],&lt;br /&gt;
                  &amp;quot;lng&amp;quot; : row[&amp;quot;gis_location.lon&amp;quot;]}&lt;br /&gt;
        results.append(result)&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
===Searching Geonames===&lt;br /&gt;
&lt;br /&gt;
If the initial search for the user query on the internal 'Locations' table, then a search is done on [http://www.geonames.org/ Geonames] as a fallback. This fallback search is implemented within the 'gis' controller using 'urllib2' an extensible Python module for opening URLs. The base URL for the Geonames search is http://ws.geonames.org/searchJSON?. The searchJSON action expects the Geonames username, the prefix of the location names to be searched, number of rows in the JSON result, etc. as parameters.  The following code performs the HTTP GET request and loads the JSON response in a dictionary.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
username = settings.get_gis_geonames_username()&lt;br /&gt;
maxrows = &amp;quot;20&amp;quot;&lt;br /&gt;
lang = &amp;quot;en&amp;quot;&lt;br /&gt;
charset = &amp;quot;UTF8&amp;quot;&lt;br /&gt;
nameStartsWith = user_str&lt;br /&gt;
geonames_base_url = &amp;quot;http://ws.geonames.org/searchJSON?&amp;quot;&lt;br /&gt;
url = &amp;quot;%susername=%s&amp;amp;maxRows=%s&amp;amp;lang=%s&amp;amp;charset=%s&amp;amp;name_startsWith=%s&amp;quot; % (geonames_base_url,username,maxrows,lang,charset,nameStartsWith)&lt;br /&gt;
response = urllib2.urlopen(url)&lt;br /&gt;
dictResponse = json.loads(response.read().decode(response.info().getparam('charset') or 'utf-8'))&lt;br /&gt;
response.close()&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The JSON object in the response has two keys 'totalResultsCount' and 'geonames'. The object corresponding to the 'geonames' key is an array of the Geonames search results, each of which is a dictionary in itself. Of all the Keys present in a single result dictionary, the relevant ones are 'id', 'fcode', 'name', 'lat' and 'lng'. The relevant keys are extracted and a new dictionary is created for each search result. The array of new dictionaries is then returned as a response. The following code performs the decoding of the JSON object, creating new dictionary objects with the relevant keys and then collecting them as a single array.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
results = []&lt;br /&gt;
if dictResponse[&amp;quot;totalResultsCount&amp;quot;] != 0:&lt;br /&gt;
    geonamesResults = dictResponse[&amp;quot;geonames&amp;quot;]&lt;br /&gt;
    for geonamesResult in geonamesResults:&lt;br /&gt;
        result = {}&lt;br /&gt;
        result = {&amp;quot;id&amp;quot; : int(geonamesResult[&amp;quot;geonameId&amp;quot;]), &amp;quot;fcode&amp;quot; : str(geonamesResult[&amp;quot;fcode&amp;quot;]),&lt;br /&gt;
                  &amp;quot;name&amp;quot; : str(geonamesResult[&amp;quot;name&amp;quot;]),&amp;quot;lat&amp;quot; : float(geonamesResult[&amp;quot;lat&amp;quot;]),&lt;br /&gt;
                  &amp;quot;lng&amp;quot; : float(geonamesResult[&amp;quot;lng&amp;quot;])}&lt;br /&gt;
        results.append(result)&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;/div&gt;</summary>
		<author><name>Achen4</name></author>
	</entry>
	<entry>
		<id>https://wiki.expertiza.ncsu.edu/index.php?title=CSC/ECE_517_Spring_2015/oss_S1504_AAC&amp;diff=95280</id>
		<title>CSC/ECE 517 Spring 2015/oss S1504 AAC</title>
		<link rel="alternate" type="text/html" href="https://wiki.expertiza.ncsu.edu/index.php?title=CSC/ECE_517_Spring_2015/oss_S1504_AAC&amp;diff=95280"/>
		<updated>2015-03-21T22:51:27Z</updated>

		<summary type="html">&lt;p&gt;Achen4: /* Goals of the project */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;==About Sahana==&lt;br /&gt;
[http://eden.sahanafoundation.org/ Sahana Eden] is an Open Source Humanitarian Platform which can be used to provide solutions for Disaster Management, Development, and Environmental Management sectors. Being open source, it is easily customisable, extensible and free. It is supported by the [http://sahanafoundation.org/ Sahana Software Foundation]. Sahana Eden was first developed in Sri Lanka as a response to the Indian Ocean Tsunami in 2005. The code for the Sahana Eden project is hosted at [https://github.com/flavour/eden Github] and it is published under the [http://en.wikipedia.org/wiki/MIT_License MIT License]. The demo version of Sahana Eden can be found [http://demo.eden.sahanafoundation.org/eden/ here].&lt;br /&gt;
&lt;br /&gt;
Eden is a flexible humanitarian platform with a rich feature set which can be rapidly customized to adapt to existing processes and integrate with existing systems to provide effective solutions for critical humanitarian needs management either prior to or during a crisis. Sahana Eden contains a number of different modules like 'Organization Registry', 'Project Tracking', 'Human Resources', 'Inventory', 'Assets', 'Assessments', 'Scenarios &amp;amp; Events', 'Mapping', 'Messaging' which can be configured to provide a wide range of functionality.  We are contributing to Sahana Eden as a part of our Object-Oriented Design and Development's Open-Source Software (OSS) Project. In this Wiki Page, we would be explaining the goals of our project and how we implemented them.&lt;br /&gt;
&lt;br /&gt;
==Goals of the project==&lt;br /&gt;
The focus of this project is to extend upon the Geonames searchCombo feature embedded in Sahana map module. Specifically the location linked to the search term in gis_location table should be zoomed to and if the search in internal database returns no results, the search redirects to Geonames.org&lt;br /&gt;
&lt;br /&gt;
==Implementation==&lt;br /&gt;
The implementation of the zoom to location feature is contained entirely in the search_gis_locations() function in controllers/gis.py and static/scripts/gis/GeoExt/ux/GeoNamesSearchCombo.js. The only significant change to the GeoNamesSearchCombo.js was the url option which makes the search box call the search_gis_locations() method instead of querying the geonames.org website for a search location (see lines 220-221 on the pull request page).&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
The search_gis_locations() function primarily uses an adapter design. It allows the GeoNamesSearchCombo.js to query the internal database and geonames.org for data during a user search. The implementation of the search_gis_locations() function can be broken down into two section: Searching the Internal database, and Searching Geonames.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
'''Notes:'''&lt;br /&gt;
*Any groups wishing to extend, understand, or review the zoom to location feature enhancement should see this github pull request: https://github.com/flavour/eden/pull/1075/files&lt;br /&gt;
*For tracking purposes, the dialog for this project can be found on this google groups thread: https://groups.google.com/forum/#!topic/sahana-eden/vS54iJEDqvQ&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
===Searching the Internal database===&lt;br /&gt;
The following code searches the 'name' column in the gis_location database for items &amp;quot;starting with&amp;quot; the string entered in the search field of the map page on Eden.&lt;br /&gt;
&lt;br /&gt;
Here we are getting the parameters from the url. The user string is stored in a parameter called &amp;quot;name_startsWith&amp;quot;. Because eden expects a JSONP response, a callback function is needed. The name of the call back function is stored in the &amp;quot;callback_func&amp;quot; parameter of the url. We also select the &amp;quot;gis_location&amp;quot; table to get ready for the query.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
    #Get vars from url&lt;br /&gt;
    user_str = get_vars[&amp;quot;name_startsWith&amp;quot;]&lt;br /&gt;
    callback_func = request.vars[&amp;quot;callback&amp;quot;]&lt;br /&gt;
    atable = db.gis_location&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Next we perform the query on the &amp;quot;name&amp;quot; column of the gis_location table. Note that the &amp;quot;%&amp;quot; is a wildcard and allows us to do a partial match with the string the user entered into the search field. The rows variable selects the fields of our interest. For the search, we only care about the entry id, zoom level (aka. level), name, latitude, and longitude.&lt;br /&gt;
&lt;br /&gt;
We also count and store the number of results from the search since it's a required field for the search box.&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
    query = atable.name.lower().like(user_str + '%')&lt;br /&gt;
    rows = db(query).select(atable.id,&lt;br /&gt;
                            atable.level,&lt;br /&gt;
                            atable.name,&lt;br /&gt;
                            atable.lat,&lt;br /&gt;
                           atable.lon&lt;br /&gt;
                            )&lt;br /&gt;
    results = []&lt;br /&gt;
    count = 0&lt;br /&gt;
    for row in rows:&lt;br /&gt;
        count += 1&lt;br /&gt;
        result = {}&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
        #Convert the level colum into the ADM codes geonames returns&lt;br /&gt;
        #fcode = row[&amp;quot;gis_location.level&amp;quot;]&lt;br /&gt;
        level = row[&amp;quot;gis_location.level&amp;quot;]&lt;br /&gt;
        if level==&amp;quot;L0&amp;quot;: #Country&lt;br /&gt;
            fcode = &amp;quot;PCL&amp;quot; #Zoom 5&lt;br /&gt;
        elif level==&amp;quot;L1&amp;quot;: #State/Province&lt;br /&gt;
            fcode = &amp;quot;ADM1&amp;quot;&lt;br /&gt;
        elif level==&amp;quot;L2&amp;quot;: #County/District&lt;br /&gt;
            fcode = &amp;quot;ADM2&amp;quot;&lt;br /&gt;
        elif level==&amp;quot;L3&amp;quot;: #Village/Suburb&lt;br /&gt;
            fcode = &amp;quot;ADM3&amp;quot;&lt;br /&gt;
        else: #City/Town/Village&lt;br /&gt;
            fcode = &amp;quot;ADM4&amp;quot;&lt;br /&gt;
             &lt;br /&gt;
        result = {&amp;quot;id&amp;quot; : row[&amp;quot;gis_location.id&amp;quot;],&lt;br /&gt;
                  &amp;quot;fcode&amp;quot; : fcode,&lt;br /&gt;
                  &amp;quot;name&amp;quot; : row[&amp;quot;gis_location.name&amp;quot;],&lt;br /&gt;
                  &amp;quot;lat&amp;quot; : row[&amp;quot;gis_location.lat&amp;quot;],&lt;br /&gt;
                  &amp;quot;lng&amp;quot; : row[&amp;quot;gis_location.lon&amp;quot;]}&lt;br /&gt;
        results.append(result)&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
===Searching Geonames===&lt;br /&gt;
&lt;br /&gt;
If the initial search for the user query on the internal 'Locations' table, then a search is done on [http://www.geonames.org/ Geonames] as a fallback. This fallback search is implemented within the 'gis' controller using 'urllib2' an extensible Python module for opening URLs. The base URL for the Geonames search is http://ws.geonames.org/searchJSON?. The searchJSON action expects the Geonames username, the prefix of the location names to be searched, number of rows in the JSON result, etc. as parameters.  The following code performs the HTTP GET request and loads the JSON response in a dictionary.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
username = settings.get_gis_geonames_username()&lt;br /&gt;
maxrows = &amp;quot;20&amp;quot;&lt;br /&gt;
lang = &amp;quot;en&amp;quot;&lt;br /&gt;
charset = &amp;quot;UTF8&amp;quot;&lt;br /&gt;
nameStartsWith = user_str&lt;br /&gt;
geonames_base_url = &amp;quot;http://ws.geonames.org/searchJSON?&amp;quot;&lt;br /&gt;
url = &amp;quot;%susername=%s&amp;amp;maxRows=%s&amp;amp;lang=%s&amp;amp;charset=%s&amp;amp;name_startsWith=%s&amp;quot; % (geonames_base_url,username,maxrows,lang,charset,nameStartsWith)&lt;br /&gt;
response = urllib2.urlopen(url)&lt;br /&gt;
dictResponse = json.loads(response.read().decode(response.info().getparam('charset') or 'utf-8'))&lt;br /&gt;
response.close()&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The JSON object in the response has two keys: 'totalResultsCount' and 'geonames'. The value of the 'geonames' key is an array of the Geonames search results, each of which is itself a dictionary. Of all the keys present in a single result dictionary, the relevant ones are 'geonameId', 'fcode', 'name', 'lat' and 'lng'. These keys are extracted and a new dictionary is created for each search result; the array of new dictionaries is then returned as the response. The following code decodes the JSON object, builds a new dictionary with the relevant keys for each result, and collects them into a single array.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
results = []&lt;br /&gt;
if dictResponse[&amp;quot;totalResultsCount&amp;quot;] != 0:&lt;br /&gt;
    geonamesResults = dictResponse[&amp;quot;geonames&amp;quot;]&lt;br /&gt;
    for geonamesResult in geonamesResults:&lt;br /&gt;
        result = {}&lt;br /&gt;
        result = {&amp;quot;id&amp;quot; : int(geonamesResult[&amp;quot;geonameId&amp;quot;]), &amp;quot;fcode&amp;quot; : str(geonamesResult[&amp;quot;fcode&amp;quot;]),&lt;br /&gt;
                  &amp;quot;name&amp;quot; : str(geonamesResult[&amp;quot;name&amp;quot;]),&amp;quot;lat&amp;quot; : float(geonamesResult[&amp;quot;lat&amp;quot;]),&lt;br /&gt;
                  &amp;quot;lng&amp;quot; : float(geonamesResult[&amp;quot;lng&amp;quot;])}&lt;br /&gt;
        results.append(result)&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;/div&gt;</summary>
		<author><name>Achen4</name></author>
	</entry>
	<entry>
		<id>https://wiki.expertiza.ncsu.edu/index.php?title=CSC/ECE_517_Spring_2015/oss_S1504_AAC&amp;diff=95278</id>
		<title>CSC/ECE 517 Spring 2015/oss S1504 AAC</title>
		<link rel="alternate" type="text/html" href="https://wiki.expertiza.ncsu.edu/index.php?title=CSC/ECE_517_Spring_2015/oss_S1504_AAC&amp;diff=95278"/>
		<updated>2015-03-21T22:48:26Z</updated>

		<summary type="html">&lt;p&gt;Achen4: /* Implementation */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;==About Sahana==&lt;br /&gt;
[http://eden.sahanafoundation.org/ Sahana Eden] is an Open Source Humanitarian Platform which can be used to provide solutions for Disaster Management, Development, and Environmental Management sectors. Being open source, it is easily customisable, extensible and free. It is supported by the [http://sahanafoundation.org/ Sahana Software Foundation]. Sahana Eden was first developed in Sri Lanka as a response to the Indian Ocean Tsunami in 2005. The code for the Sahana Eden project is hosted at [https://github.com/flavour/eden Github] and it is published under the [http://en.wikipedia.org/wiki/MIT_License MIT License]. The demo version of Sahana Eden can be found [http://demo.eden.sahanafoundation.org/eden/ here].&lt;br /&gt;
&lt;br /&gt;
Eden is a flexible humanitarian platform with a rich feature set which can be rapidly customized to adapt to existing processes and integrate with existing systems to provide effective solutions for critical humanitarian needs management either prior to or during a crisis. Sahana Eden contains a number of different modules like 'Organization Registry', 'Project Tracking', 'Human Resources', 'Inventory', 'Assets', 'Assessments', 'Scenarios &amp;amp; Events', 'Mapping', 'Messaging' which can be configured to provide a wide range of functionality.  We are contributing to Sahana Eden as a part of our Object-Oriented Design and Development's Open-Source Software (OSS) Project. In this Wiki Page, we would be explaining the goals of our project and how we implemented them.&lt;br /&gt;
&lt;br /&gt;
==Goals of the project==&lt;br /&gt;
==Implementation==&lt;br /&gt;
The implementation of the zoom to location feature is contained entirely in the search_gis_locations() function in controllers/gis.py and in static/scripts/gis/GeoExt/ux/GeoNamesSearchCombo.js. The only significant change to GeoNamesSearchCombo.js was the url option, which makes the search box call the search_gis_locations() method instead of querying the geonames.org website directly (see lines 220-221 on the pull request page).&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
The search_gis_locations() function essentially acts as an adapter: it allows GeoNamesSearchCombo.js to query both the internal database and geonames.org during a user search. Its implementation can be broken down into two sections: searching the internal database, and searching Geonames. A simplified sketch of the overall flow is given immediately below.&lt;br /&gt;
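&lt;br /&gt;
The sketch below is only an illustration of that adapter flow, not code taken from the pull request: search_internal_db() and search_geonames() are hypothetical helpers standing in for the two sections documented on the rest of this page, and the exact shape of the JSONP payload is an assumption.&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
def search_gis_locations():&lt;br /&gt;
    # Illustrative sketch only; the two helpers below are hypothetical.&lt;br /&gt;
    user_str = get_vars['name_startsWith']&lt;br /&gt;
    callback_func = request.vars['callback']&lt;br /&gt;
&lt;br /&gt;
    # Try the internal gis_location table first.&lt;br /&gt;
    results, count = search_internal_db(user_str)&lt;br /&gt;
&lt;br /&gt;
    # Fall back to geonames.org only when nothing is found locally.&lt;br /&gt;
    if count == 0:&lt;br /&gt;
        results, count = search_geonames(user_str)&lt;br /&gt;
&lt;br /&gt;
    # Wrap the results in the JSONP callback the search box supplied.&lt;br /&gt;
    payload = json.dumps({'totalResultsCount': count, 'geonames': results})&lt;br /&gt;
    return '%s(%s)' % (callback_func, payload)&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;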
&lt;br /&gt;
&lt;br /&gt;
'''Notes:'''&lt;br /&gt;
*Any groups wishing to extend, understand, or review the zoom to location feature enhancement should see this github pull request: https://github.com/flavour/eden/pull/1075/files&lt;br /&gt;
*For tracking purposes, the dialog for this project can be found on this google groups thread: https://groups.google.com/forum/#!topic/sahana-eden/vS54iJEDqvQ&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
===Searching the Internal database===&lt;br /&gt;
The following code searches the 'name' column of the gis_location table for names starting with the string entered in the search field of the map page on Eden.&lt;br /&gt;
&lt;br /&gt;
Here we read the parameters from the URL. The user's search string arrives in a parameter called &amp;quot;name_startsWith&amp;quot;. Because Eden expects a JSONP response, a callback function is needed; its name is passed in the &amp;quot;callback&amp;quot; parameter of the URL and stored in the callback_func variable. We also grab a reference to the &amp;quot;gis_location&amp;quot; table to get ready for the query.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
    #Get vars from url&lt;br /&gt;
    user_str = get_vars[&amp;quot;name_startsWith&amp;quot;]&lt;br /&gt;
    callback_func = request.vars[&amp;quot;callback&amp;quot;]&lt;br /&gt;
    atable = db.gis_location&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Next we perform the query on the &amp;quot;name&amp;quot; column of the gis_location table. The &amp;quot;%&amp;quot; is a wildcard, so the query matches any name that starts with the string the user entered into the search field. The rows variable holds the selected fields of interest: for this search we only care about the entry id, the zoom level (the &amp;quot;level&amp;quot; column), the name, the latitude, and the longitude.&lt;br /&gt;
&lt;br /&gt;
We also count and store the number of results from the search since it's a required field for the search box.&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
    query = atable.name.lower().like(user_str + '%')&lt;br /&gt;
    rows = db(query).select(atable.id,&lt;br /&gt;
                            atable.level,&lt;br /&gt;
                            atable.name,&lt;br /&gt;
                            atable.lat,&lt;br /&gt;
                            atable.lon&lt;br /&gt;
                            )&lt;br /&gt;
    results = []&lt;br /&gt;
    count = 0&lt;br /&gt;
    for row in rows:&lt;br /&gt;
        count += 1&lt;br /&gt;
        result = {}&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
        #Convert the level column into the ADM codes geonames returns&lt;br /&gt;
        #fcode = row[&amp;quot;gis_location.level&amp;quot;]&lt;br /&gt;
        level = row[&amp;quot;gis_location.level&amp;quot;]&lt;br /&gt;
        if level==&amp;quot;L0&amp;quot;: #Country&lt;br /&gt;
            fcode = &amp;quot;PCL&amp;quot; #Zoom 5&lt;br /&gt;
        elif level==&amp;quot;L1&amp;quot;: #State/Province&lt;br /&gt;
            fcode = &amp;quot;ADM1&amp;quot;&lt;br /&gt;
        elif level==&amp;quot;L2&amp;quot;: #County/District&lt;br /&gt;
            fcode = &amp;quot;ADM2&amp;quot;&lt;br /&gt;
        elif level==&amp;quot;L3&amp;quot;: #Village/Suburb&lt;br /&gt;
            fcode = &amp;quot;ADM3&amp;quot;&lt;br /&gt;
        else: #City/Town/Village&lt;br /&gt;
            fcode = &amp;quot;ADM4&amp;quot;&lt;br /&gt;
             &lt;br /&gt;
        result = {&amp;quot;id&amp;quot; : row[&amp;quot;gis_location.id&amp;quot;],&lt;br /&gt;
                  &amp;quot;fcode&amp;quot; : fcode,&lt;br /&gt;
                  &amp;quot;name&amp;quot; : row[&amp;quot;gis_location.name&amp;quot;],&lt;br /&gt;
                  &amp;quot;lat&amp;quot; : row[&amp;quot;gis_location.lat&amp;quot;],&lt;br /&gt;
                  &amp;quot;lng&amp;quot; : row[&amp;quot;gis_location.lon&amp;quot;]}&lt;br /&gt;
        results.append(result)&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
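&lt;br /&gt;
The count and the collected results still have to be handed back to the search box as JSONP. A minimal sketch of how that response could be assembled is shown below; wrapping the data under 'totalResultsCount' and 'geonames' keys mirrors the Geonames response format and is an assumption here, not code taken from the pull request.&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
    # Sketch (assumption): wrap the results in the JSONP callback so the&lt;br /&gt;
    # search box can consume them like a Geonames response.&lt;br /&gt;
    payload = json.dumps({'totalResultsCount': count,&lt;br /&gt;
                          'geonames': results})&lt;br /&gt;
    return '%s(%s)' % (callback_func, payload)&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;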
&lt;br /&gt;
===Searching Geonames===&lt;br /&gt;
&lt;br /&gt;
If the initial search for the user's query on the internal 'Locations' table returns no results, a search is done on [http://www.geonames.org/ Geonames] as a fallback. This fallback search is implemented within the 'gis' controller using 'urllib2', an extensible Python module for opening URLs. The base URL for the Geonames search is http://ws.geonames.org/searchJSON?. The searchJSON action expects the Geonames username, the prefix of the location names to be searched, the maximum number of rows in the JSON result, etc. as parameters. The following code performs the HTTP GET request and loads the JSON response into a dictionary.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
username = settings.get_gis_geonames_username()&lt;br /&gt;
maxrows = &amp;quot;20&amp;quot;&lt;br /&gt;
lang = &amp;quot;en&amp;quot;&lt;br /&gt;
charset = &amp;quot;UTF8&amp;quot;&lt;br /&gt;
nameStartsWith = user_str&lt;br /&gt;
geonames_base_url = &amp;quot;http://ws.geonames.org/searchJSON?&amp;quot;&lt;br /&gt;
url = &amp;quot;%susername=%s&amp;amp;maxRows=%s&amp;amp;lang=%s&amp;amp;charset=%s&amp;amp;name_startsWith=%s&amp;quot; % (geonames_base_url,username,maxrows,lang,charset,nameStartsWith)&lt;br /&gt;
response = urllib2.urlopen(url)&lt;br /&gt;
dictResponse = json.loads(response.read().decode(response.info().getparam('charset') or 'utf-8'))&lt;br /&gt;
response.close()&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The JSON object in the response has two keys: 'totalResultsCount' and 'geonames'. The value of the 'geonames' key is an array of the Geonames search results, each of which is itself a dictionary. Of all the keys present in a single result dictionary, the relevant ones are 'geonameId', 'fcode', 'name', 'lat' and 'lng'. These keys are extracted and a new dictionary is created for each search result; the array of new dictionaries is then returned as the response. The following code decodes the JSON object, builds a new dictionary with the relevant keys for each result, and collects them into a single array.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
results = []&lt;br /&gt;
if dictResponse[&amp;quot;totalResultsCount&amp;quot;] != 0:&lt;br /&gt;
    geonamesResults = dictResponse[&amp;quot;geonames&amp;quot;]&lt;br /&gt;
    for geonamesResult in geonamesResults:&lt;br /&gt;
        result = {}&lt;br /&gt;
        result = {&amp;quot;id&amp;quot; : int(geonamesResult[&amp;quot;geonameId&amp;quot;]), &amp;quot;fcode&amp;quot; : str(geonamesResult[&amp;quot;fcode&amp;quot;]),&lt;br /&gt;
                  &amp;quot;name&amp;quot; : str(geonamesResult[&amp;quot;name&amp;quot;]),&amp;quot;lat&amp;quot; : float(geonamesResult[&amp;quot;lat&amp;quot;]),&lt;br /&gt;
                  &amp;quot;lng&amp;quot; : float(geonamesResult[&amp;quot;lng&amp;quot;])}&lt;br /&gt;
        results.append(result)&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;/div&gt;</summary>
		<author><name>Achen4</name></author>
	</entry>
	<entry>
		<id>https://wiki.expertiza.ncsu.edu/index.php?title=CSC/ECE_517_Spring_2015/oss_S1504_AAC&amp;diff=95277</id>
		<title>CSC/ECE 517 Spring 2015/oss S1504 AAC</title>
		<link rel="alternate" type="text/html" href="https://wiki.expertiza.ncsu.edu/index.php?title=CSC/ECE_517_Spring_2015/oss_S1504_AAC&amp;diff=95277"/>
		<updated>2015-03-21T22:48:11Z</updated>

		<summary type="html">&lt;p&gt;Achen4: /* Implementation */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;==About Sahana==&lt;br /&gt;
[http://eden.sahanafoundation.org/ Sahana Eden] is an Open Source Humanitarian Platform which can be used to provide solutions for Disaster Management, Development, and Environmental Management sectors. Being open source, it is easily customisable, extensible and free. It is supported by the [http://sahanafoundation.org/ Sahana Software Foundation]. Sahana Eden was first developed in Sri Lanka as a response to the Indian Ocean Tsunami in 2005. The code for the Sahana Eden project is hosted at [https://github.com/flavour/eden Github] and it is published under the [http://en.wikipedia.org/wiki/MIT_License MIT License]. The demo version of Sahana Eden can be found [http://demo.eden.sahanafoundation.org/eden/ here].&lt;br /&gt;
&lt;br /&gt;
Eden is a flexible humanitarian platform with a rich feature set which can be rapidly customized to adapt to existing processes and integrate with existing systems to provide effective solutions for critical humanitarian needs management either prior to or during a crisis. Sahana Eden contains a number of different modules like 'Organization Registry', 'Project Tracking', 'Human Resources', 'Inventory', 'Assets', 'Assessments', 'Scenarios &amp;amp; Events', 'Mapping', 'Messaging' which can be configured to provide a wide range of functionality.  We are contributing to Sahana Eden as a part of our Object-Oriented Design and Development's Open-Source Software (OSS) Project. In this Wiki Page, we would be explaining the goals of our project and how we implemented them.&lt;br /&gt;
&lt;br /&gt;
==Goals of the project==&lt;br /&gt;
==Implementation==&lt;br /&gt;
The implementation of the zoom to location feature is contained entirely in the search_gis_locations() function in controllers/gis.py and in static/scripts/gis/GeoExt/ux/GeoNamesSearchCombo.js. The only significant change to GeoNamesSearchCombo.js was the url option, which makes the search box call the search_gis_locations() method instead of querying the geonames.org website directly (see lines 220-221 on the pull request page).&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
The search_gis_locations() function essentially acts as an adapter: it allows GeoNamesSearchCombo.js to query both the internal database and geonames.org during a user search. Its implementation can be broken down into two sections: searching the internal database, and searching Geonames.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
'''Notes:'''&lt;br /&gt;
*Any groups wishing to extend, understand, or review the zoom to location feature enhancement should see this github pull request: https://github.com/flavour/eden/pull/1075/files&lt;br /&gt;
*For tracking purposes, the dialog for this project can be found on this google groups thread: https://groups.google.com/forum/#!topic/sahana-eden/vS54iJEDqvQ&lt;br /&gt;
&lt;br /&gt;
===Searching the Internal database===&lt;br /&gt;
The following code searches the 'name' column of the gis_location table for names starting with the string entered in the search field of the map page on Eden.&lt;br /&gt;
&lt;br /&gt;
Here we read the parameters from the URL. The user's search string arrives in a parameter called &amp;quot;name_startsWith&amp;quot;. Because Eden expects a JSONP response, a callback function is needed; its name is passed in the &amp;quot;callback&amp;quot; parameter of the URL and stored in the callback_func variable. We also grab a reference to the &amp;quot;gis_location&amp;quot; table to get ready for the query.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
    #Get vars from url&lt;br /&gt;
    user_str = get_vars[&amp;quot;name_startsWith&amp;quot;]&lt;br /&gt;
    callback_func = request.vars[&amp;quot;callback&amp;quot;]&lt;br /&gt;
    atable = db.gis_location&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Next we perform the query on the &amp;quot;name&amp;quot; column of the gis_location table. The &amp;quot;%&amp;quot; is a wildcard, so the query matches any name that starts with the string the user entered into the search field. The rows variable holds the selected fields of interest: for this search we only care about the entry id, the zoom level (the &amp;quot;level&amp;quot; column), the name, the latitude, and the longitude.&lt;br /&gt;
&lt;br /&gt;
We also count and store the number of results from the search since it's a required field for the search box.&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
    query = atable.name.lower().like(user_str + '%')&lt;br /&gt;
    rows = db(query).select(atable.id,&lt;br /&gt;
                            atable.level,&lt;br /&gt;
                            atable.name,&lt;br /&gt;
                            atable.lat,&lt;br /&gt;
                            atable.lon&lt;br /&gt;
                            )&lt;br /&gt;
    results = []&lt;br /&gt;
    count = 0&lt;br /&gt;
    for row in rows:&lt;br /&gt;
        count += 1&lt;br /&gt;
        result = {}&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
        #Convert the level column into the ADM codes geonames returns&lt;br /&gt;
        #fcode = row[&amp;quot;gis_location.level&amp;quot;]&lt;br /&gt;
        level = row[&amp;quot;gis_location.level&amp;quot;]&lt;br /&gt;
        if level==&amp;quot;L0&amp;quot;: #Country&lt;br /&gt;
            fcode = &amp;quot;PCL&amp;quot; #Zoom 5&lt;br /&gt;
        elif level==&amp;quot;L1&amp;quot;: #State/Province&lt;br /&gt;
            fcode = &amp;quot;ADM1&amp;quot;&lt;br /&gt;
        elif level==&amp;quot;L2&amp;quot;: #County/District&lt;br /&gt;
            fcode = &amp;quot;ADM2&amp;quot;&lt;br /&gt;
        elif level==&amp;quot;L3&amp;quot;: #Village/Suburb&lt;br /&gt;
            fcode = &amp;quot;ADM3&amp;quot;&lt;br /&gt;
        else: #City/Town/Village&lt;br /&gt;
            fcode = &amp;quot;ADM4&amp;quot;&lt;br /&gt;
             &lt;br /&gt;
        result = {&amp;quot;id&amp;quot; : row[&amp;quot;gis_location.id&amp;quot;],&lt;br /&gt;
                  &amp;quot;fcode&amp;quot; : fcode,&lt;br /&gt;
                  &amp;quot;name&amp;quot; : row[&amp;quot;gis_location.name&amp;quot;],&lt;br /&gt;
                  &amp;quot;lat&amp;quot; : row[&amp;quot;gis_location.lat&amp;quot;],&lt;br /&gt;
                  &amp;quot;lng&amp;quot; : row[&amp;quot;gis_location.lon&amp;quot;]}&lt;br /&gt;
        results.append(result)&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
===Searching Geonames===&lt;br /&gt;
&lt;br /&gt;
If the initial search for the user's query on the internal 'Locations' table returns no results, a search is done on [http://www.geonames.org/ Geonames] as a fallback. This fallback search is implemented within the 'gis' controller using 'urllib2', an extensible Python module for opening URLs. The base URL for the Geonames search is http://ws.geonames.org/searchJSON?. The searchJSON action expects the Geonames username, the prefix of the location names to be searched, the maximum number of rows in the JSON result, etc. as parameters. The following code performs the HTTP GET request and loads the JSON response into a dictionary.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
username = settings.get_gis_geonames_username()&lt;br /&gt;
maxrows = &amp;quot;20&amp;quot;&lt;br /&gt;
lang = &amp;quot;en&amp;quot;&lt;br /&gt;
charset = &amp;quot;UTF8&amp;quot;&lt;br /&gt;
nameStartsWith = user_str&lt;br /&gt;
geonames_base_url = &amp;quot;http://ws.geonames.org/searchJSON?&amp;quot;&lt;br /&gt;
url = &amp;quot;%susername=%s&amp;amp;maxRows=%s&amp;amp;lang=%s&amp;amp;charset=%s&amp;amp;name_startsWith=%s&amp;quot; % (geonames_base_url,username,maxrows,lang,charset,nameStartsWith)&lt;br /&gt;
response = urllib2.urlopen(url)&lt;br /&gt;
dictResponse = json.loads(response.read().decode(response.info().getparam('charset') or 'utf-8'))&lt;br /&gt;
response.close()&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The JSON object in the response has two keys: 'totalResultsCount' and 'geonames'. The value of the 'geonames' key is an array of the Geonames search results, each of which is itself a dictionary. Of all the keys present in a single result dictionary, the relevant ones are 'geonameId', 'fcode', 'name', 'lat' and 'lng'. These keys are extracted and a new dictionary is created for each search result; the array of new dictionaries is then returned as the response. The following code decodes the JSON object, builds a new dictionary with the relevant keys for each result, and collects them into a single array.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
results = []&lt;br /&gt;
if dictResponse[&amp;quot;totalResultsCount&amp;quot;] != 0:&lt;br /&gt;
    geonamesResults = dictResponse[&amp;quot;geonames&amp;quot;]&lt;br /&gt;
    for geonamesResult in geonamesResults:&lt;br /&gt;
        result = {}&lt;br /&gt;
        result = {&amp;quot;id&amp;quot; : int(geonamesResult[&amp;quot;geonameId&amp;quot;]), &amp;quot;fcode&amp;quot; : str(geonamesResult[&amp;quot;fcode&amp;quot;]),&lt;br /&gt;
                  &amp;quot;name&amp;quot; : str(geonamesResult[&amp;quot;name&amp;quot;]),&amp;quot;lat&amp;quot; : float(geonamesResult[&amp;quot;lat&amp;quot;]),&lt;br /&gt;
                  &amp;quot;lng&amp;quot; : float(geonamesResult[&amp;quot;lng&amp;quot;])}&lt;br /&gt;
        results.append(result)&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;/div&gt;</summary>
		<author><name>Achen4</name></author>
	</entry>
	<entry>
		<id>https://wiki.expertiza.ncsu.edu/index.php?title=CSC/ECE_517_Spring_2015/oss_S1504_AAC&amp;diff=95276</id>
		<title>CSC/ECE 517 Spring 2015/oss S1504 AAC</title>
		<link rel="alternate" type="text/html" href="https://wiki.expertiza.ncsu.edu/index.php?title=CSC/ECE_517_Spring_2015/oss_S1504_AAC&amp;diff=95276"/>
		<updated>2015-03-21T22:47:40Z</updated>

		<summary type="html">&lt;p&gt;Achen4: /* Implementation */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;==About Sahana==&lt;br /&gt;
[http://eden.sahanafoundation.org/ Sahana Eden] is an Open Source Humanitarian Platform which can be used to provide solutions for Disaster Management, Development, and Environmental Management sectors. Being open source, it is easily customisable, extensible and free. It is supported by the [http://sahanafoundation.org/ Sahana Software Foundation]. Sahana Eden was first developed in Sri Lanka as a response to the Indian Ocean Tsunami in 2005. The code for the Sahana Eden project is hosted at [https://github.com/flavour/eden Github] and it is published under the [http://en.wikipedia.org/wiki/MIT_License MIT License]. The demo version of Sahana Eden can be found [http://demo.eden.sahanafoundation.org/eden/ here].&lt;br /&gt;
&lt;br /&gt;
Eden is a flexible humanitarian platform with a rich feature set which can be rapidly customized to adapt to existing processes and integrate with existing systems to provide effective solutions for critical humanitarian needs management either prior to or during a crisis. Sahana Eden contains a number of different modules like 'Organization Registry', 'Project Tracking', 'Human Resources', 'Inventory', 'Assets', 'Assessments', 'Scenarios &amp;amp; Events', 'Mapping', 'Messaging' which can be configured to provide a wide range of functionality.  We are contributing to Sahana Eden as a part of our Object-Oriented Design and Development's Open-Source Software (OSS) Project. In this Wiki Page, we would be explaining the goals of our project and how we implemented them.&lt;br /&gt;
&lt;br /&gt;
==Goals of the project==&lt;br /&gt;
==Implementation==&lt;br /&gt;
'''Notes:'''&lt;br /&gt;
*Any groups wishing to extend, understand, or review the zoom to location feature enhancement should see this github pull request: https://github.com/flavour/eden/pull/1075/files&lt;br /&gt;
*For tracking purposes, the dialog for this project can be found on this google groups thread: https://groups.google.com/forum/#!topic/sahana-eden/vS54iJEDqvQ&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
The implementation of the zoom to location feature is contained entirely in the search_gis_locations() function in controllers/gis.py and in static/scripts/gis/GeoExt/ux/GeoNamesSearchCombo.js. The only significant change to GeoNamesSearchCombo.js was the url option, which makes the search box call the search_gis_locations() method instead of querying the geonames.org website directly (see lines 220-221 on the pull request page).&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
The search_gis_locations() function essentially acts as an adapter: it allows GeoNamesSearchCombo.js to query both the internal database and geonames.org during a user search. Its implementation can be broken down into two sections: searching the internal database, and searching Geonames.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
===Searching the Internal database===&lt;br /&gt;
The following code searches the 'name' column of the gis_location table for names starting with the string entered in the search field of the map page on Eden.&lt;br /&gt;
&lt;br /&gt;
Here we read the parameters from the URL. The user's search string arrives in a parameter called &amp;quot;name_startsWith&amp;quot;. Because Eden expects a JSONP response, a callback function is needed; its name is passed in the &amp;quot;callback&amp;quot; parameter of the URL and stored in the callback_func variable. We also grab a reference to the &amp;quot;gis_location&amp;quot; table to get ready for the query.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
    #Get vars from url&lt;br /&gt;
    user_str = get_vars[&amp;quot;name_startsWith&amp;quot;]&lt;br /&gt;
    callback_func = request.vars[&amp;quot;callback&amp;quot;]&lt;br /&gt;
    atable = db.gis_location&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Next we perform the query on the &amp;quot;name&amp;quot; column of the gis_location table. The &amp;quot;%&amp;quot; is a wildcard, so the query matches any name that starts with the string the user entered into the search field. The rows variable holds the selected fields of interest: for this search we only care about the entry id, the zoom level (the &amp;quot;level&amp;quot; column), the name, the latitude, and the longitude.&lt;br /&gt;
&lt;br /&gt;
We also count and store the number of results from the search since it's a required field for the search box.&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
    query = atable.name.lower().like(user_str + '%')&lt;br /&gt;
    rows = db(query).select(atable.id,&lt;br /&gt;
                            atable.level,&lt;br /&gt;
                            atable.name,&lt;br /&gt;
                            atable.lat,&lt;br /&gt;
                            atable.lon&lt;br /&gt;
                            )&lt;br /&gt;
    results = []&lt;br /&gt;
    count = 0&lt;br /&gt;
    for row in rows:&lt;br /&gt;
        count += 1&lt;br /&gt;
        result = {}&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
        #Convert the level column into the ADM codes geonames returns&lt;br /&gt;
        #fcode = row[&amp;quot;gis_location.level&amp;quot;]&lt;br /&gt;
        level = row[&amp;quot;gis_location.level&amp;quot;]&lt;br /&gt;
        if level==&amp;quot;L0&amp;quot;: #Country&lt;br /&gt;
            fcode = &amp;quot;PCL&amp;quot; #Zoom 5&lt;br /&gt;
        elif level==&amp;quot;L1&amp;quot;: #State/Province&lt;br /&gt;
            fcode = &amp;quot;ADM1&amp;quot;&lt;br /&gt;
        elif level==&amp;quot;L2&amp;quot;: #County/District&lt;br /&gt;
            fcode = &amp;quot;ADM2&amp;quot;&lt;br /&gt;
        elif level==&amp;quot;L3&amp;quot;: #Village/Suburb&lt;br /&gt;
            fcode = &amp;quot;ADM3&amp;quot;&lt;br /&gt;
        else: #City/Town/Village&lt;br /&gt;
            fcode = &amp;quot;ADM4&amp;quot;&lt;br /&gt;
             &lt;br /&gt;
        result = {&amp;quot;id&amp;quot; : row[&amp;quot;gis_location.id&amp;quot;],&lt;br /&gt;
                  &amp;quot;fcode&amp;quot; : fcode,&lt;br /&gt;
                  &amp;quot;name&amp;quot; : row[&amp;quot;gis_location.name&amp;quot;],&lt;br /&gt;
                  &amp;quot;lat&amp;quot; : row[&amp;quot;gis_location.lat&amp;quot;],&lt;br /&gt;
                  &amp;quot;lng&amp;quot; : row[&amp;quot;gis_location.lon&amp;quot;]}&lt;br /&gt;
        results.append(result)&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
===Searching Geonames===&lt;br /&gt;
&lt;br /&gt;
If the initial search for the user's query on the internal 'Locations' table returns no results, a search is done on [http://www.geonames.org/ Geonames] as a fallback. This fallback search is implemented within the 'gis' controller using 'urllib2', an extensible Python module for opening URLs. The base URL for the Geonames search is http://ws.geonames.org/searchJSON?. The searchJSON action expects the Geonames username, the prefix of the location names to be searched, the maximum number of rows in the JSON result, etc. as parameters. The following code performs the HTTP GET request and loads the JSON response into a dictionary.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
username = settings.get_gis_geonames_username()&lt;br /&gt;
maxrows = &amp;quot;20&amp;quot;&lt;br /&gt;
lang = &amp;quot;en&amp;quot;&lt;br /&gt;
charset = &amp;quot;UTF8&amp;quot;&lt;br /&gt;
nameStartsWith = user_str&lt;br /&gt;
geonames_base_url = &amp;quot;http://ws.geonames.org/searchJSON?&amp;quot;&lt;br /&gt;
url = &amp;quot;%susername=%s&amp;amp;maxRows=%s&amp;amp;lang=%s&amp;amp;charset=%s&amp;amp;name_startsWith=%s&amp;quot; % (geonames_base_url,username,maxrows,lang,charset,nameStartsWith)&lt;br /&gt;
response = urllib2.urlopen(url)&lt;br /&gt;
dictResponse = json.loads(response.read().decode(response.info().getparam('charset') or 'utf-8'))&lt;br /&gt;
response.close()&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The JSON object in the response has two keys: 'totalResultsCount' and 'geonames'. The value of the 'geonames' key is an array of the Geonames search results, each of which is itself a dictionary. Of all the keys present in a single result dictionary, the relevant ones are 'geonameId', 'fcode', 'name', 'lat' and 'lng'. These keys are extracted and a new dictionary is created for each search result; the array of new dictionaries is then returned as the response. The following code decodes the JSON object, builds a new dictionary with the relevant keys for each result, and collects them into a single array.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
results = []&lt;br /&gt;
if dictResponse[&amp;quot;totalResultsCount&amp;quot;] != 0:&lt;br /&gt;
    geonamesResults = dictResponse[&amp;quot;geonames&amp;quot;]&lt;br /&gt;
    for geonamesResult in geonamesResults:&lt;br /&gt;
        result = {}&lt;br /&gt;
        result = {&amp;quot;id&amp;quot; : int(geonamesResult[&amp;quot;geonameId&amp;quot;]), &amp;quot;fcode&amp;quot; : str(geonamesResult[&amp;quot;fcode&amp;quot;]),&lt;br /&gt;
                  &amp;quot;name&amp;quot; : str(geonamesResult[&amp;quot;name&amp;quot;]),&amp;quot;lat&amp;quot; : float(geonamesResult[&amp;quot;lat&amp;quot;]),&lt;br /&gt;
                  &amp;quot;lng&amp;quot; : float(geonamesResult[&amp;quot;lng&amp;quot;])}&lt;br /&gt;
        results.append(result)&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;/div&gt;</summary>
		<author><name>Achen4</name></author>
	</entry>
	<entry>
		<id>https://wiki.expertiza.ncsu.edu/index.php?title=CSC/ECE_517_Spring_2015/oss_S1504_AAC&amp;diff=95275</id>
		<title>CSC/ECE 517 Spring 2015/oss S1504 AAC</title>
		<link rel="alternate" type="text/html" href="https://wiki.expertiza.ncsu.edu/index.php?title=CSC/ECE_517_Spring_2015/oss_S1504_AAC&amp;diff=95275"/>
		<updated>2015-03-21T22:46:35Z</updated>

		<summary type="html">&lt;p&gt;Achen4: /* Implementation */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;==About Sahana==&lt;br /&gt;
[http://eden.sahanafoundation.org/ Sahana Eden] is an Open Source Humanitarian Platform which can be used to provide solutions for Disaster Management, Development, and Environmental Management sectors. Being open source, it is easily customisable, extensible and free. It is supported by the [http://sahanafoundation.org/ Sahana Software Foundation]. Sahana Eden was first developed in Sri Lanka as a response to the Indian Ocean Tsunami in 2005. The code for the Sahana Eden project is hosted at [https://github.com/flavour/eden Github] and it is published under the [http://en.wikipedia.org/wiki/MIT_License MIT License]. The demo version of Sahana Eden can be found [http://demo.eden.sahanafoundation.org/eden/ here].&lt;br /&gt;
&lt;br /&gt;
Eden is a flexible humanitarian platform with a rich feature set which can be rapidly customized to adapt to existing processes and integrate with existing systems to provide effective solutions for critical humanitarian needs management either prior to or during a crisis. Sahana Eden contains a number of different modules like 'Organization Registry', 'Project Tracking', 'Human Resources', 'Inventory', 'Assets', 'Assessments', 'Scenarios &amp;amp; Events', 'Mapping', 'Messaging' which can be configured to provide a wide range of functionality.  We are contributing to Sahana Eden as a part of our Object-Oriented Design and Development's Open-Source Software (OSS) Project. In this Wiki Page, we would be explaining the goals of our project and how we implemented them.&lt;br /&gt;
&lt;br /&gt;
==Goals of the project==&lt;br /&gt;
==Implementation==&lt;br /&gt;
Any groups wishing to extend, understand, or review the zoom to location feature enhancement should see this github pull request: https://github.com/flavour/eden/pull/1075/files&lt;br /&gt;
For tracking purposes, the dialog for this project can be found on this google groups thread: https://groups.google.com/forum/#!topic/sahana-eden/vS54iJEDqvQ&lt;br /&gt;
&lt;br /&gt;
The implementation of the zoom to location feature is contained entirely in the search_gis_locations() function in controllers/gis.py and in static/scripts/gis/GeoExt/ux/GeoNamesSearchCombo.js. The only significant change to GeoNamesSearchCombo.js was the url option, which makes the search box call the search_gis_locations() method instead of querying the geonames.org website directly (see lines 220-221 on the pull request page).&lt;br /&gt;
&lt;br /&gt;
The search_gis_locations() function essentially acts as an adapter: it allows GeoNamesSearchCombo.js to query both the internal database and geonames.org during a user search. Its implementation can be broken down into two sections: searching the internal database, and searching Geonames.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
===Searching the Internal database===&lt;br /&gt;
The following code searches the 'name' column of the gis_location table for names starting with the string entered in the search field of the map page on Eden.&lt;br /&gt;
&lt;br /&gt;
Here we read the parameters from the URL. The user's search string arrives in a parameter called &amp;quot;name_startsWith&amp;quot;. Because Eden expects a JSONP response, a callback function is needed; its name is passed in the &amp;quot;callback&amp;quot; parameter of the URL and stored in the callback_func variable. We also grab a reference to the &amp;quot;gis_location&amp;quot; table to get ready for the query.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
    #Get vars from url&lt;br /&gt;
    user_str = get_vars[&amp;quot;name_startsWith&amp;quot;]&lt;br /&gt;
    callback_func = request.vars[&amp;quot;callback&amp;quot;]&lt;br /&gt;
    atable = db.gis_location&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Next we perform the query on the &amp;quot;name&amp;quot; column of the gis_location table. The &amp;quot;%&amp;quot; is a wildcard, so the query matches any name that starts with the string the user entered into the search field. The rows variable holds the selected fields of interest: for this search we only care about the entry id, the zoom level (the &amp;quot;level&amp;quot; column), the name, the latitude, and the longitude.&lt;br /&gt;
&lt;br /&gt;
We also count and store the number of results from the search since it's a required field for the search box.&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
    query = atable.name.lower().like(user_str + '%')&lt;br /&gt;
    rows = db(query).select(atable.id,&lt;br /&gt;
                            atable.level,&lt;br /&gt;
                            atable.name,&lt;br /&gt;
                            atable.lat,&lt;br /&gt;
                            atable.lon&lt;br /&gt;
                            )&lt;br /&gt;
    results = []&lt;br /&gt;
    count = 0&lt;br /&gt;
    for row in rows:&lt;br /&gt;
        count += 1&lt;br /&gt;
        result = {}&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
        #Convert the level column into the ADM codes geonames returns&lt;br /&gt;
        #fcode = row[&amp;quot;gis_location.level&amp;quot;]&lt;br /&gt;
        level = row[&amp;quot;gis_location.level&amp;quot;]&lt;br /&gt;
        if level==&amp;quot;L0&amp;quot;: #Country&lt;br /&gt;
            fcode = &amp;quot;PCL&amp;quot; #Zoom 5&lt;br /&gt;
        elif level==&amp;quot;L1&amp;quot;: #State/Province&lt;br /&gt;
            fcode = &amp;quot;ADM1&amp;quot;&lt;br /&gt;
        elif level==&amp;quot;L2&amp;quot;: #County/District&lt;br /&gt;
            fcode = &amp;quot;ADM2&amp;quot;&lt;br /&gt;
        elif level==&amp;quot;L3&amp;quot;: #Village/Suburb&lt;br /&gt;
            fcode = &amp;quot;ADM3&amp;quot;&lt;br /&gt;
        else: #City/Town/Village&lt;br /&gt;
            fcode = &amp;quot;ADM4&amp;quot;&lt;br /&gt;
             &lt;br /&gt;
        result = {&amp;quot;id&amp;quot; : row[&amp;quot;gis_location.id&amp;quot;],&lt;br /&gt;
                  &amp;quot;fcode&amp;quot; : fcode,&lt;br /&gt;
                  &amp;quot;name&amp;quot; : row[&amp;quot;gis_location.name&amp;quot;],&lt;br /&gt;
                  &amp;quot;lat&amp;quot; : row[&amp;quot;gis_location.lat&amp;quot;],&lt;br /&gt;
                  &amp;quot;lng&amp;quot; : row[&amp;quot;gis_location.lon&amp;quot;]}&lt;br /&gt;
        results.append(result)&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
===Searching Geonames===&lt;br /&gt;
&lt;br /&gt;
If the initial search for the user's query on the internal 'Locations' table returns no results, a search is done on [http://www.geonames.org/ Geonames] as a fallback. This fallback search is implemented within the 'gis' controller using 'urllib2', an extensible Python module for opening URLs. The base URL for the Geonames search is http://ws.geonames.org/searchJSON?. The searchJSON action expects the Geonames username, the prefix of the location names to be searched, the maximum number of rows in the JSON result, etc. as parameters. The following code performs the HTTP GET request and loads the JSON response into a dictionary.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
username = settings.get_gis_geonames_username()&lt;br /&gt;
maxrows = &amp;quot;20&amp;quot;&lt;br /&gt;
lang = &amp;quot;en&amp;quot;&lt;br /&gt;
charset = &amp;quot;UTF8&amp;quot;&lt;br /&gt;
nameStartsWith = user_str&lt;br /&gt;
geonames_base_url = &amp;quot;http://ws.geonames.org/searchJSON?&amp;quot;&lt;br /&gt;
url = &amp;quot;%susername=%s&amp;amp;maxRows=%s&amp;amp;lang=%s&amp;amp;charset=%s&amp;amp;name_startsWith=%s&amp;quot; % (geonames_base_url,username,maxrows,lang,charset,nameStartsWith)&lt;br /&gt;
response = urllib2.urlopen(url)&lt;br /&gt;
dictResponse = json.loads(response.read().decode(response.info().getparam('charset') or 'utf-8'))&lt;br /&gt;
response.close()&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The JSON object in the response has two keys: 'totalResultsCount' and 'geonames'. The value of the 'geonames' key is an array of the Geonames search results, each of which is itself a dictionary. Of all the keys present in a single result dictionary, the relevant ones are 'geonameId', 'fcode', 'name', 'lat' and 'lng'. These keys are extracted and a new dictionary is created for each search result; the array of new dictionaries is then returned as the response. The following code decodes the JSON object, builds a new dictionary with the relevant keys for each result, and collects them into a single array.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
results = []&lt;br /&gt;
if dictResponse[&amp;quot;totalResultsCount&amp;quot;] != 0:&lt;br /&gt;
    geonamesResults = dictResponse[&amp;quot;geonames&amp;quot;]&lt;br /&gt;
    for geonamesResult in geonamesResults:&lt;br /&gt;
        result = {}&lt;br /&gt;
        result = {&amp;quot;id&amp;quot; : int(geonamesResult[&amp;quot;geonameId&amp;quot;]), &amp;quot;fcode&amp;quot; : str(geonamesResult[&amp;quot;fcode&amp;quot;]),&lt;br /&gt;
                  &amp;quot;name&amp;quot; : str(geonamesResult[&amp;quot;name&amp;quot;]),&amp;quot;lat&amp;quot; : float(geonamesResult[&amp;quot;lat&amp;quot;]),&lt;br /&gt;
                  &amp;quot;lng&amp;quot; : float(geonamesResult[&amp;quot;lng&amp;quot;])}&lt;br /&gt;
        results.append(result)&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;/div&gt;</summary>
		<author><name>Achen4</name></author>
	</entry>
	<entry>
		<id>https://wiki.expertiza.ncsu.edu/index.php?title=CSC/ECE_517_Spring_2015/oss_S1504_AAC&amp;diff=95186</id>
		<title>CSC/ECE 517 Spring 2015/oss S1504 AAC</title>
		<link rel="alternate" type="text/html" href="https://wiki.expertiza.ncsu.edu/index.php?title=CSC/ECE_517_Spring_2015/oss_S1504_AAC&amp;diff=95186"/>
		<updated>2015-03-21T21:25:45Z</updated>

		<summary type="html">&lt;p&gt;Achen4: /* Searching Internal database */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;==About Sahana==&lt;br /&gt;
[http://eden.sahanafoundation.org/ Sahana Eden] is an Open Source Humanitarian Platform which can be used to provide solutions for Disaster Management, Development, and Environmental Management sectors. Being open source, it is easily customisable, extensible and free. It is supported by the [http://sahanafoundation.org/ Sahana Software Foundation]. Sahana Eden was first developed in Sri Lanka as a response to the Indian Ocean Tsunami in 2005. The code for the Sahana Eden project is hosted at [https://github.com/flavour/eden Github] and it is published under the [http://en.wikipedia.org/wiki/MIT_License MIT License]. The demo version of Sahana Eden can be found [http://demo.eden.sahanafoundation.org/eden/ here].&lt;br /&gt;
&lt;br /&gt;
Eden is a flexible humanitarian platform with a rich feature set which can be rapidly customized to adapt to existing processes and integrate with existing systems to provide effective solutions for critical humanitarian needs management either prior to or during a crisis. Sahana Eden contains a number of different modules like 'Organization Registry', 'Project Tracking', 'Human Resources', 'Inventory', 'Assets', 'Assessments', 'Scenarios &amp;amp; Events', 'Mapping', 'Messaging' which can be configured to provide a wide range of functionality.  We are contributing to Sahana Eden as a part of our Object-Oriented Design and Development's Open-Source Software (OSS) Project. In this Wiki Page, we would be explaining the goals of our project and how we implemented them.&lt;br /&gt;
&lt;br /&gt;
==Goals of the project==&lt;br /&gt;
==Implementation==&lt;br /&gt;
===Searching Internal database===&lt;br /&gt;
The following code searches the 'name' column of the gis_location table for names starting with the string entered in the search field of the map page on Eden.&lt;br /&gt;
&lt;br /&gt;
Here we read the parameters from the URL. The user's search string arrives in a parameter called &amp;quot;name_startsWith&amp;quot;. Because Eden expects a JSONP response, a callback function is needed; its name is passed in the &amp;quot;callback&amp;quot; parameter of the URL and stored in the callback_func variable. We also grab a reference to the &amp;quot;gis_location&amp;quot; table to get ready for the query.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
    #Get vars from url&lt;br /&gt;
    user_str = get_vars[&amp;quot;name_startsWith&amp;quot;]&lt;br /&gt;
    callback_func = request.vars[&amp;quot;callback&amp;quot;]&lt;br /&gt;
    atable = db.gis_location&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Next we perform the query on the &amp;quot;name&amp;quot; column of the gis_location table. The &amp;quot;%&amp;quot; is a wildcard, so the query matches any name that starts with the string the user entered into the search field. The rows variable holds the selected fields of interest: for this search we only care about the entry id, the zoom level (the &amp;quot;level&amp;quot; column), the name, the latitude, and the longitude.&lt;br /&gt;
&lt;br /&gt;
We also count and store the number of results from the search since it's a required field for the search box.&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
    query = atable.name.lower().like(user_str + '%')&lt;br /&gt;
    rows = db(query).select(atable.id,&lt;br /&gt;
                            atable.level,&lt;br /&gt;
                            atable.name,&lt;br /&gt;
                            atable.lat,&lt;br /&gt;
                            atable.lon&lt;br /&gt;
                            )&lt;br /&gt;
    results = []&lt;br /&gt;
    count = 0&lt;br /&gt;
    for row in rows:&lt;br /&gt;
        count += 1&lt;br /&gt;
        result = {}&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
        #Convert the level column into the ADM codes geonames returns&lt;br /&gt;
        #fcode = row[&amp;quot;gis_location.level&amp;quot;]&lt;br /&gt;
        level = row[&amp;quot;gis_location.level&amp;quot;]&lt;br /&gt;
        if level==&amp;quot;L0&amp;quot;: #Country&lt;br /&gt;
            fcode = &amp;quot;PCL&amp;quot; #Zoom 5&lt;br /&gt;
        elif level==&amp;quot;L1&amp;quot;: #State/Province&lt;br /&gt;
            fcode = &amp;quot;ADM1&amp;quot;&lt;br /&gt;
        elif level==&amp;quot;L2&amp;quot;: #County/District&lt;br /&gt;
            fcode = &amp;quot;ADM2&amp;quot;&lt;br /&gt;
        elif level==&amp;quot;L3&amp;quot;: #Village/Suburb&lt;br /&gt;
            fcode = &amp;quot;ADM3&amp;quot;&lt;br /&gt;
        else: #City/Town/Village&lt;br /&gt;
            fcode = &amp;quot;ADM4&amp;quot;&lt;br /&gt;
             &lt;br /&gt;
        result = {&amp;quot;id&amp;quot; : row[&amp;quot;gis_location.id&amp;quot;],&lt;br /&gt;
                  &amp;quot;fcode&amp;quot; : fcode,&lt;br /&gt;
                  &amp;quot;name&amp;quot; : row[&amp;quot;gis_location.name&amp;quot;],&lt;br /&gt;
                  &amp;quot;lat&amp;quot; : row[&amp;quot;gis_location.lat&amp;quot;],&lt;br /&gt;
                  &amp;quot;lng&amp;quot; : row[&amp;quot;gis_location.lon&amp;quot;]}&lt;br /&gt;
        results.append(result)&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
===Searching geonames===&lt;br /&gt;
&lt;br /&gt;
If the initial search for the user's query on the internal 'Locations' table returns no results, a search is done on [http://www.geonames.org/ Geonames] as a fallback. This fallback search is implemented within the 'gis' controller using 'urllib2', an extensible Python module for opening URLs. The base URL for the Geonames search is http://ws.geonames.org/searchJSON?. The searchJSON action expects the Geonames username, the prefix of the location names to be searched, the maximum number of rows in the JSON result, etc. as parameters. The following code performs the HTTP GET request and loads the JSON response into a dictionary.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
username = settings.get_gis_geonames_username()&lt;br /&gt;
maxrows = &amp;quot;20&amp;quot;&lt;br /&gt;
lang = &amp;quot;en&amp;quot;&lt;br /&gt;
charset = &amp;quot;UTF8&amp;quot;&lt;br /&gt;
nameStartsWith = user_str&lt;br /&gt;
geonames_base_url = &amp;quot;http://ws.geonames.org/searchJSON?&amp;quot;&lt;br /&gt;
url = &amp;quot;%susername=%s&amp;amp;maxRows=%s&amp;amp;lang=%s&amp;amp;charset=%s&amp;amp;name_startsWith=%s&amp;quot; % (geonames_base_url,username,maxrows,lang,charset,nameStartsWith)&lt;br /&gt;
response = urllib2.urlopen(url)&lt;br /&gt;
dictResponse = json.loads(response.read().decode(response.info().getparam('charset') or 'utf-8'))&lt;br /&gt;
response.close()&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The JSON object in the response has two keys: 'totalResultsCount' and 'geonames'. The value of the 'geonames' key is an array of the Geonames search results, each of which is itself a dictionary. Of all the keys present in a single result dictionary, the relevant ones are 'geonameId', 'fcode', 'name', 'lat' and 'lng'. These keys are extracted and a new dictionary is created for each search result; the array of new dictionaries is then returned as the response. The following code decodes the JSON object, builds a new dictionary with the relevant keys for each result, and collects them into a single array.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
results = []&lt;br /&gt;
if dictResponse[&amp;quot;totalResultsCount&amp;quot;] != 0:&lt;br /&gt;
    geonamesResults = dictResponse[&amp;quot;geonames&amp;quot;]&lt;br /&gt;
    for geonamesResult in geonamesResults:&lt;br /&gt;
        result = {}&lt;br /&gt;
        result = {&amp;quot;id&amp;quot; : int(geonamesResult[&amp;quot;geonameId&amp;quot;]), &amp;quot;fcode&amp;quot; : str(geonamesResult[&amp;quot;fcode&amp;quot;]),&lt;br /&gt;
                  &amp;quot;name&amp;quot; : str(geonamesResult[&amp;quot;name&amp;quot;]),&amp;quot;lat&amp;quot; : float(geonamesResult[&amp;quot;lat&amp;quot;]),&lt;br /&gt;
                  &amp;quot;lng&amp;quot; : float(geonamesResult[&amp;quot;lng&amp;quot;])}&lt;br /&gt;
        results.append(result)&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;/div&gt;</summary>
		<author><name>Achen4</name></author>
	</entry>
	<entry>
		<id>https://wiki.expertiza.ncsu.edu/index.php?title=CSC/ECE_517_Spring_2015/oss_S1504_AAC&amp;diff=95101</id>
		<title>CSC/ECE 517 Spring 2015/oss S1504 AAC</title>
		<link rel="alternate" type="text/html" href="https://wiki.expertiza.ncsu.edu/index.php?title=CSC/ECE_517_Spring_2015/oss_S1504_AAC&amp;diff=95101"/>
		<updated>2015-03-21T19:32:42Z</updated>

		<summary type="html">&lt;p&gt;Achen4: /* Searching Internal database */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;==About Sahana==&lt;br /&gt;
==Goals of the project==&lt;br /&gt;
==Implementation==&lt;br /&gt;
===Searching Internal database===&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
    #Get vars from url&lt;br /&gt;
    user_str = get_vars[&amp;quot;name_startsWith&amp;quot;]&lt;br /&gt;
    callback_func = request.vars[&amp;quot;callback&amp;quot;]&lt;br /&gt;
    atable = db.gis_location&lt;br /&gt;
    query = atable.name.lower().like(user_str + '%')&lt;br /&gt;
    rows = db(query).select(atable.id,&lt;br /&gt;
                            atable.level,&lt;br /&gt;
                            atable.name,&lt;br /&gt;
                            atable.lat,&lt;br /&gt;
                            atable.lon&lt;br /&gt;
                            )&lt;br /&gt;
    results = []&lt;br /&gt;
    count = 0&lt;br /&gt;
    for row in rows:&lt;br /&gt;
        count += 1&lt;br /&gt;
        result = {}&lt;br /&gt;
         &lt;br /&gt;
        #Convert the level column into the ADM codes geonames returns&lt;br /&gt;
        #fcode = row[&amp;quot;gis_location.level&amp;quot;]&lt;br /&gt;
        level = row[&amp;quot;gis_location.level&amp;quot;]&lt;br /&gt;
        if level==&amp;quot;L0&amp;quot;: #Country&lt;br /&gt;
            fcode = &amp;quot;PCL&amp;quot; #Zoom 5&lt;br /&gt;
        elif level==&amp;quot;L1&amp;quot;: #State/Province&lt;br /&gt;
            fcode = &amp;quot;ADM1&amp;quot;&lt;br /&gt;
        elif level==&amp;quot;L2&amp;quot;: #County/District&lt;br /&gt;
            fcode = &amp;quot;ADM2&amp;quot;&lt;br /&gt;
        elif level==&amp;quot;L3&amp;quot;: #Village/Suburb&lt;br /&gt;
            fcode = &amp;quot;ADM3&amp;quot;&lt;br /&gt;
        else: #City/Town/Village&lt;br /&gt;
            fcode = &amp;quot;ADM4&amp;quot;&lt;br /&gt;
             &lt;br /&gt;
        result = {&amp;quot;id&amp;quot; : row[&amp;quot;gis_location.id&amp;quot;],&lt;br /&gt;
                  &amp;quot;fcode&amp;quot; : fcode,&lt;br /&gt;
                  &amp;quot;name&amp;quot; : row[&amp;quot;gis_location.name&amp;quot;],&lt;br /&gt;
                  &amp;quot;lat&amp;quot; : row[&amp;quot;gis_location.lat&amp;quot;],&lt;br /&gt;
                  &amp;quot;lng&amp;quot; : row[&amp;quot;gis_location.lon&amp;quot;]}&lt;br /&gt;
        results.append(result)&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
===Searching geonames===&lt;/div&gt;</summary>
		<author><name>Achen4</name></author>
	</entry>
	<entry>
		<id>https://wiki.expertiza.ncsu.edu/index.php?title=CSC/ECE_517_Spring_2015/oss_S1504_AAC&amp;diff=95087</id>
		<title>CSC/ECE 517 Spring 2015/oss S1504 AAC</title>
		<link rel="alternate" type="text/html" href="https://wiki.expertiza.ncsu.edu/index.php?title=CSC/ECE_517_Spring_2015/oss_S1504_AAC&amp;diff=95087"/>
		<updated>2015-03-21T19:17:37Z</updated>

		<summary type="html">&lt;p&gt;Achen4: Created page with &amp;quot;==About Sahana== ==Goals of the project== ==Implementation== ===Searching Internal database=== ===Searching geonames===&amp;quot;&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;==About Sahana==&lt;br /&gt;
==Goals of the project==&lt;br /&gt;
==Implementation==&lt;br /&gt;
===Searching Internal database===&lt;br /&gt;
===Searching geonames===&lt;/div&gt;</summary>
		<author><name>Achen4</name></author>
	</entry>
	<entry>
		<id>https://wiki.expertiza.ncsu.edu/index.php?title=CSC/ECE_517_Spring_2015&amp;diff=95086</id>
		<title>CSC/ECE 517 Spring 2015</title>
		<link rel="alternate" type="text/html" href="https://wiki.expertiza.ncsu.edu/index.php?title=CSC/ECE_517_Spring_2015&amp;diff=95086"/>
		<updated>2015-03-21T19:16:23Z</updated>

		<summary type="html">&lt;p&gt;Achen4: /* Writing Assignment 2 */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;==Writing Assignment 1==&lt;br /&gt;
*[[CSC/ECE 517 Spring 2015/ch1a 17 WL]]&lt;br /&gt;
*[[CSC/ECE 517 Spring 2015/ch1a 5 ZX]]&lt;br /&gt;
*[[CSC/ECE 517 Spring 2015/ch1a 6 TZ]]&lt;br /&gt;
*[[CSC/ECE 517 Spring 2015/ch1a 4 RW]]&lt;br /&gt;
*[[CSC/ECE 517 Spring 2015/ch1a 7 SA]]&lt;br /&gt;
*[[CSC/ECE 517 Spring 2015/ch1a 9 RA]]&lt;br /&gt;
*[[CSC/ECE 517 Spring 2015/ch1a 14 RI]]&lt;br /&gt;
*[[CSC/ECE 517 Spring 2015/ch1a 1 DZ]]&lt;br /&gt;
*[[CSC/ECE 517 Spring 2015/ch1a 20 HA]]&lt;br /&gt;
*[[CSC/ECE 517 Spring 2015/ch1a 3 RF]]&lt;br /&gt;
*[[CSC/ECE 517 Spring 2015/ch1a 12 LS]]&lt;br /&gt;
*[[CSC/ECE 517 Spring 2015/ch1a 13 MA]]&lt;br /&gt;
*[[CSC/ECE 517 Spring 2015/ch1a 2 WA]]&lt;br /&gt;
*[[CSC/ECE 517 Spring 2015/ch1b 21 QW]]&lt;br /&gt;
*[[CSC/ECE 517 Spring 2015/ch1b 23 MS]]&lt;br /&gt;
*[[CSC/ECE 517 Spring 2015/ch1b 10 GL]]&lt;br /&gt;
*[[CSC/ECE 517 Spring 2015/ch1b 27 VC]]&lt;br /&gt;
*[[CSC/ECE 517 Spring 2015/ch1b 22 SF]]&lt;br /&gt;
*[[CSC/ECE 517 Spring 2015/ch1b 15 SH]]&lt;br /&gt;
*[[CSC/ECE 517 Spring 2015/ch1b 18 AS]]&lt;br /&gt;
&lt;br /&gt;
==Writing Assignment 2==&lt;br /&gt;
*[[CSC/ECE 517 Fall 2014/oss E1502 wwj]]&lt;br /&gt;
*[[CSC/ECE 517 Fall 2014/oss E1508 MRS]]&lt;br /&gt;
*[[CSC/ECE 517 Spring 2015/oss E1504 IMV]]&lt;br /&gt;
*[[CSC/ECE 517 Spring 2015/oss E1505 xzl]]&lt;br /&gt;
*[[CSC/ECE 517 Spring 2015/oss E1509 lds]]&lt;br /&gt;
*[[CSC/ECE 517 Spring 2015/oss E1510 FLP]]&lt;br /&gt;
*[[CSC/ECE 517 Spring 2015/oss E1506 SYZ]]&lt;br /&gt;
*[[CSC/ECE 517 Spring 2015/oss S1504 AAC]]&lt;/div&gt;</summary>
		<author><name>Achen4</name></author>
	</entry>
	<entry>
		<id>https://wiki.expertiza.ncsu.edu/index.php?title=CSC/ECE_517_Spring_2015&amp;diff=95085</id>
		<title>CSC/ECE 517 Spring 2015</title>
		<link rel="alternate" type="text/html" href="https://wiki.expertiza.ncsu.edu/index.php?title=CSC/ECE_517_Spring_2015&amp;diff=95085"/>
		<updated>2015-03-21T19:15:30Z</updated>

		<summary type="html">&lt;p&gt;Achen4: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;==Writing Assignment 1==&lt;br /&gt;
*[[CSC/ECE 517 Spring 2015/ch1a 17 WL]]&lt;br /&gt;
*[[CSC/ECE 517 Spring 2015/ch1a 5 ZX]]&lt;br /&gt;
*[[CSC/ECE 517 Spring 2015/ch1a 6 TZ]]&lt;br /&gt;
*[[CSC/ECE 517 Spring 2015/ch1a 4 RW]]&lt;br /&gt;
*[[CSC/ECE 517 Spring 2015/ch1a 7 SA]]&lt;br /&gt;
*[[CSC/ECE 517 Spring 2015/ch1a 9 RA]]&lt;br /&gt;
*[[CSC/ECE 517 Spring 2015/ch1a 14 RI]]&lt;br /&gt;
*[[CSC/ECE 517 Spring 2015/ch1a 1 DZ]]&lt;br /&gt;
*[[CSC/ECE 517 Spring 2015/ch1a 20 HA]]&lt;br /&gt;
*[[CSC/ECE 517 Spring 2015/ch1a 3 RF]]&lt;br /&gt;
*[[CSC/ECE 517 Spring 2015/ch1a 12 LS]]&lt;br /&gt;
*[[CSC/ECE 517 Spring 2015/ch1a 13 MA]]&lt;br /&gt;
*[[CSC/ECE 517 Spring 2015/ch1a 2 WA]]&lt;br /&gt;
*[[CSC/ECE 517 Spring 2015/ch1b 21 QW]]&lt;br /&gt;
*[[CSC/ECE 517 Spring 2015/ch1b 23 MS]]&lt;br /&gt;
*[[CSC/ECE 517 Spring 2015/ch1b 10 GL]]&lt;br /&gt;
*[[CSC/ECE 517 Spring 2015/ch1b 27 VC]]&lt;br /&gt;
*[[CSC/ECE 517 Spring 2015/ch1b 22 SF]]&lt;br /&gt;
*[[CSC/ECE 517 Spring 2015/ch1b 15 SH]]&lt;br /&gt;
*[[CSC/ECE 517 Spring 2015/ch1b 18 AS]]&lt;br /&gt;
&lt;br /&gt;
==Writing Assignment 2==&lt;br /&gt;
*[[CSC/ECE 517 Fall 2014/oss E1502 wwj]]&lt;br /&gt;
*[[CSC/ECE 517 Fall 2014/oss E1508 MRS]]&lt;br /&gt;
*[[CSC/ECE 517 Spring 2015/oss E1504 IMV]]&lt;br /&gt;
*[[CSC/ECE 517 Spring 2015/oss E1505 xzl]]&lt;br /&gt;
*[[CSC/ECE 517 Spring 2015/oss E1509 lds]]&lt;br /&gt;
*[[CSC/ECE 517 Spring 2015/oss E1510 FLP]]&lt;br /&gt;
*[[CSC/ECE 517 Spring 2015/oss E1506 SYZ]]&lt;br /&gt;
*[[CSC/ECE 517 Spring 2015/oss S1504 - Sahana Zoom to location]]&lt;/div&gt;</summary>
		<author><name>Achen4</name></author>
	</entry>
	<entry>
		<id>https://wiki.expertiza.ncsu.edu/index.php?title=CSC/ECE_517_Spring_2015/ch1a_7_SA&amp;diff=93743</id>
		<title>CSC/ECE 517 Spring 2015/ch1a 7 SA</title>
		<link rel="alternate" type="text/html" href="https://wiki.expertiza.ncsu.edu/index.php?title=CSC/ECE_517_Spring_2015/ch1a_7_SA&amp;diff=93743"/>
		<updated>2015-02-14T03:01:57Z</updated>

		<summary type="html">&lt;p&gt;Achen4: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;[[File:S3.gif|frame|Source: http://www.w7cloud.com/7-reasons-to-use-amazon-s3-cloud-computing-online-storage/|right]] Amazon Simple Storage Service (Amazon S3) is a remote, scalable, secure, and cost-efficient storage service provided by Amazon. Users are able to access their storage on Amazon S3 from the web via REST &amp;lt;ref&amp;gt;[http://en.wikipedia.org/wiki/Representational_state_transfer Wikipedia: REST]&amp;lt;/ref&amp;gt;, HTTP &amp;lt;ref&amp;gt;[http://en.wikipedia.org/wiki/Hypertext_Transfer_Protocol HTTP]&amp;lt;/ref&amp;gt;, or SOAP &amp;lt;ref&amp;gt;[http://en.wikipedia.org/wiki/SOAP SOAP]&amp;lt;/ref&amp;gt;, making their data accessible from virtually anywhere in the world. Amazon S3 implements redundancy across multiple devices in multiple facilities in order to safeguard against application failure and data loss and to minimize downtime &amp;lt;ref&amp;gt;[http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/AmazonS3.html Amazon S3]&amp;lt;/ref&amp;gt;. Some of the most prominent users of Amazon S3 include Netflix, SmugMug, WeTransfer, Pinterest, and NASDAQ &amp;lt;ref&amp;gt;[http://aws.amazon.com/s3/ Amazon S3 Homepage]&amp;lt;/ref&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
[https://docs.google.com/a/ncsu.edu/document/d/1TgBtp7flIPKJwkkShgtcIkt--mtHuwVHsQX6Tpzj1rc/edit Writing Assignment 1a]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
==Background==&lt;br /&gt;
&lt;br /&gt;
Amazon S3 launched in March 2006 in the United States &amp;lt;ref&amp;gt;[http://phx.corporate-ir.net/phoenix.zhtml?c=176060&amp;amp;p=irol-newsArticle&amp;amp;ID=830816 Amazon Press Release]&amp;lt;/ref&amp;gt; and in Europe in November 2007 &amp;lt;ref&amp;gt;[http://phx.corporate-ir.net/phoenix.zhtml?c=176060&amp;amp;p=irol-newsArticle&amp;amp;ID=1072982 Amazon Press Release]&amp;lt;/ref&amp;gt;. Since its inception, Amazon S3 has reported tremendous growth. In July 2006, S3 hosted 800 million objects; by April 2007, 5 billion objects; by October 2007, 10 billion; by January 2008, 14 billion &amp;lt;ref&amp;gt;[http://www.allthingsdistributed.com/2008/03/happy_birthday_amazon_s3.html Happy birthday Amazon S3]&amp;lt;/ref&amp;gt;; by October 2008, 29 billion &amp;lt;ref&amp;gt;[https://aws.amazon.com/blogs/aws/amazon-s3-now/ amazon s3 now]&amp;lt;/ref&amp;gt;; by March 2009, 52 billion &amp;lt;ref&amp;gt;[https://aws.amazon.com/blogs/aws/celebrating-s3s-third-birthday-with-an-upload-promotion/ Upload promotion]&amp;lt;/ref&amp;gt;; and by August 2009, 64 billion &amp;lt;ref&amp;gt;[http://www.eweek.com/c/a/Cloud-Computing/Amazons-Head-Start-in-the-Cloud-Pays-Off-584083 Amazons Head Start in the Cloud]&amp;lt;/ref&amp;gt;. As of April 2013, S3 hosted more than 2 trillion objects and served an average of 1.1 million requests every second &amp;lt;ref&amp;gt;[https://aws.amazon.com/blogs/aws/amazon-s3-two-trillion-objects-11-million-requests-second/ Two trillion objects]&amp;lt;/ref&amp;gt;. While Amazon S3 is a fast-growing cloud storage service, many alternatives to it exist. A brief overview of the cost and performance comparison is shown below. &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
==Alternatives to S3==&lt;br /&gt;
Several alternatives to Amazon S3 currently exist:&lt;br /&gt;
:*[https://cloud.google.com/storage/ Google Cloud Storage]&lt;br /&gt;
:*[http://azure.microsoft.com/en-us/ Microsoft Azure]&lt;br /&gt;
&lt;br /&gt;
As with any online cloud storage service, the cost of storage is always a concern. The following table shows a cost comparison between them:&lt;br /&gt;
&lt;br /&gt;
[[File:PriceCompare.png |frame |center|Source: http://www.cloudberrylab.com/blog/amazon-s3-azure-and-google-cloud-prices-compare/]]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
While all three services are fairly comparable in price, there are developer-support considerations when selecting a service. For instance, if an app is focused on the [https://www.android.com/ Android OS], using Google Cloud Storage provides more tightly coupled APIs for your app, so the development overhead may be much lower than integrating S3 through Rails APIs for a Google-centric app. Similarly, [http://www.windowsphone.com/en-us/features Windows Phone OS] apps would find more native support in Microsoft Azure, and likewise Kindle Fire apps have more native support on [https://developer.amazon.com/public/apis Fire OS].&lt;br /&gt;
&lt;br /&gt;
The reliability of a cloud storage service like S3 plays a heavy role in the selection process. Less downtime translates to a more robust and reliable app. In 2014, Amazon S3 registered only 23 outages totaling 2.69 hours of downtime whereas Google registered 8 outages totaling 14.23 hours of downtime. Microsoft Azure totaled 141 outages summing 10.97 hours. &amp;lt;ref&amp;gt;[https://gigaom.com/2015/01/07/amazon-web-services-tops-list-of-most-reliable-public-clouds/  Barb Darrow. Amazon Web Services tops list of most reliable public clouds ]&amp;lt;/ref&amp;gt;&lt;br /&gt;
&lt;br /&gt;
[[File:ServiceCompare.png |frame |center|Source: https://gigaom.com/2015/01/07/amazon-web-services-tops-list-of-most-reliable-public-clouds/]]&lt;br /&gt;
&lt;br /&gt;
==Design==&lt;br /&gt;
&lt;br /&gt;
S3 is an example of object storage and is not a traditional hierarchical file system. S3 exposes a simple feature set to improve robustness, and all data in S3 is accessed in terms of objects and buckets. &lt;br /&gt;
&lt;br /&gt;
===Objects===&lt;br /&gt;
&lt;br /&gt;
Objects are the basic units of storage in Amazon S3. Each object is composed of object data and metadata, and S3 supports objects of up to 5 terabytes. The metadata associated with an object is a set of name-value pairs that describe it, such as the date it was last modified, and users can also store custom data about the object in its metadata. Every object is identified by a user-defined key and, if versioning is enabled on its bucket, by a version ID. &amp;lt;ref&amp;gt;[http://docs.aws.amazon.com/AmazonS3/latest/dev/Introduction.html S3 Introduction]&amp;lt;/ref&amp;gt; An object consists of the following: Key, Version ID, Value, Metadata, Subresources, and Access Control Information. &amp;lt;ref&amp;gt;[http://docs.aws.amazon.com/AmazonS3/latest/dev/UsingObjects.html Using Objects]&amp;lt;/ref&amp;gt;&lt;br /&gt;
&lt;br /&gt;
===Buckets===&lt;br /&gt;
&lt;br /&gt;
A bucket is a container for objects, and every object must belong to a bucket. Any number of objects can be stored in a bucket. Buckets can be configured to be hosted in a particular region (US, EU, Asia Pacific, etc.) in order to optimize latency. S3 limits the number of buckets per account to 100. &amp;lt;ref&amp;gt;[http://docs.aws.amazon.com/AmazonS3/latest/dev/UsingBucket.html Using Bucket]&amp;lt;/ref&amp;gt;&lt;br /&gt;
&lt;br /&gt;
===Keys and Metadata===&lt;br /&gt;
&lt;br /&gt;
A user specifies a key at object creation; the key uniquely identifies the object within its bucket. Keys can be at most 1024 bytes long.&lt;br /&gt;
&lt;br /&gt;
There are two kinds of metadata for an object: system metadata and object metadata. System metadata is used by S3 for object management; for example, the object's date and Content-Type are stored as system metadata. Object metadata is optional and can be used by the user to attach additional metadata to an object during object creation. &amp;lt;ref&amp;gt;[http://docs.aws.amazon.com/AmazonS3/latest/dev/UsingMetadata.html Using Metadata]&amp;lt;/ref&amp;gt;&lt;br /&gt;
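&lt;br /&gt;
As a minimal sketch of attaching object metadata (the bucket and key names are hypothetical, and the aws-sdk version 1 interface from the upload example below is assumed):&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
require 'aws-sdk'&lt;br /&gt;
&lt;br /&gt;
s3  = AWS::S3.new&lt;br /&gt;
# hypothetical bucket and key, for illustration only&lt;br /&gt;
obj = s3.buckets['my-new-bucket'].objects['report.txt']&lt;br /&gt;
&lt;br /&gt;
# user-defined (object) metadata is supplied at write time&lt;br /&gt;
obj.write('quarterly numbers', :metadata =&amp;gt; { 'department' =&amp;gt; 'finance' })&lt;br /&gt;
&lt;br /&gt;
# ...and can be read back later&lt;br /&gt;
puts obj.metadata['department']&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;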
&lt;br /&gt;
===Regions===&lt;br /&gt;
&lt;br /&gt;
Regions allow a user to specify the geographical region where the buckets will be stored. This can be used to optimize latency and minimize costs.&lt;br /&gt;
S3 supports the following regions - US Standard, US West (Oregon) region, US West (N. California) region, EU (Ireland) region, EU (Frankfurt) region, Asia Pacific (Singapore) region, Asia Pacific (Tokyo) region, Asia Pacific (Sydney) region, South America (Sao Paulo) region &amp;lt;ref&amp;gt;[http://docs.aws.amazon.com/AmazonS3/latest/dev/Introduction.html#Regions Regions]&amp;lt;/ref&amp;gt;&lt;br /&gt;
&lt;br /&gt;
====Versioning====&lt;br /&gt;
&lt;br /&gt;
Versioning can be enabled for a bucket and used to retrieve and restore every version of an object in that bucket. Once versioning is enabled, every change to an object (create, modify, delete) results in a separate version of the object, which can later be used for restoring or recovery. Versioning is configured at the bucket level, not for individual objects. A version-enabled bucket cannot be returned to an unversioned state; versioning can only be suspended. &amp;lt;ref&amp;gt;[http://docs.aws.amazon.com/AmazonS3/latest/dev/Versioning.html Versioning]&amp;lt;/ref&amp;gt;&lt;br /&gt;
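&lt;br /&gt;
A rough sketch of enabling versioning (hypothetical bucket name, aws-sdk version 1 interface as in the upload example below):&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
require 'aws-sdk'&lt;br /&gt;
&lt;br /&gt;
s3     = AWS::S3.new&lt;br /&gt;
bucket = s3.buckets['my-new-bucket']   # hypothetical bucket name&lt;br /&gt;
&lt;br /&gt;
bucket.enable_versioning           # buckets start out unversioned&lt;br /&gt;
puts bucket.versioning_enabled?    # =&amp;gt; true&lt;br /&gt;
&lt;br /&gt;
# versioning cannot be turned off again, only suspended&lt;br /&gt;
bucket.suspend_versioning&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;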
&lt;br /&gt;
===Access Permissions===&lt;br /&gt;
&lt;br /&gt;
All resources (buckets, objects, etc.) in Amazon S3 are private by default. Only the resource owner can access a resource, and the owner can grant other users access to it. There are two types of access policies in S3: resource-based policies and user policies. Resource-based policies are attached to a particular resource, while user policies are assigned to a particular user.&amp;lt;ref&amp;gt;[http://docs.aws.amazon.com/AmazonS3/latest/dev/s3-access-control.html Access Control]&amp;lt;/ref&amp;gt;&lt;br /&gt;
&lt;br /&gt;
===Data Protection===&lt;br /&gt;
&lt;br /&gt;
Objects are redundantly stored on multiple devices across multiple facilities within a region for durability. To improve durability, write requests do not return success until the data has been stored across multiple facilities. Checksums are also used to verify data integrity; if any corruption is detected, it is repaired using the redundant data.&amp;lt;ref&amp;gt;[http://docs.aws.amazon.com/AmazonS3/latest/dev/DataDurability.html Data Durability]&amp;lt;/ref&amp;gt;&lt;br /&gt;
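&lt;br /&gt;
S3's internal checksumming is transparent to clients, but a client can perform a similar sanity check itself. The sketch below (hypothetical file and bucket names, aws-sdk version 1 interface as in the upload example below) compares a local file's MD5 digest with the stored object's ETag; this simple comparison only holds for objects uploaded in a single part:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
require 'aws-sdk'&lt;br /&gt;
require 'digest/md5'&lt;br /&gt;
&lt;br /&gt;
s3  = AWS::S3.new&lt;br /&gt;
obj = s3.buckets['my-new-bucket'].objects['poetry.pdf']   # hypothetical names&lt;br /&gt;
&lt;br /&gt;
local_md5  = Digest::MD5.file('/home/larry/documents/poetry.pdf').hexdigest&lt;br /&gt;
remote_md5 = obj.etag.delete('&amp;quot;')   # the ETag header is wrapped in quotes&lt;br /&gt;
&lt;br /&gt;
puts(local_md5 == remote_md5 ? 'object is intact' : 'checksums differ')&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;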
&lt;br /&gt;
==Ruby and Amazon S3==&lt;br /&gt;
&lt;br /&gt;
Amazon Web Services (AWS) provides an SDK &amp;lt;ref&amp;gt;[http://rubygems.org/gems/aws-sdk download]&amp;lt;/ref&amp;gt; that works with Ruby for many Amazon web services, including Amazon S3. Developers new to the AWS SDK should begin with version 2, as it includes many built-in features such as waiters, automatically paginated responses, and a streamlined plugin-style architecture. Version 2 of the SDK consists of two &amp;quot;packages&amp;quot;, also referred to as &amp;quot;gems&amp;quot; &amp;lt;ref&amp;gt;[http://en.wikipedia.org/wiki/RubyGems Wikipedia: RubyGems]&amp;lt;/ref&amp;gt;:&lt;br /&gt;
&lt;br /&gt;
:* '''aws-sdk-core''' - provides a direct mapping to the AWS APIs including automatic response paging, waiters, parameter validation, and Ruby type support&lt;br /&gt;
:* '''aws-sdk-resources'''  - provides an object-oriented abstraction over the low-level interfaces in the core to reduce the complexity of using them; resource objects represent entities such as an Amazon S3 bucket or object and expose their attributes and actions as instance variables and methods.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
It should also be noted that there is a version 1 of the AWS SDK that lacks some &amp;quot;convenience features&amp;quot; otherwise available in version 2 of the SDK. For more information, see the [http://ruby.awsblog.com/post/Tx2OMCYFEZX2I6A/AWS-SDK-for-Ruby-V2-Preview-Release AWS Ruby Development Blog].&lt;br /&gt;
&lt;br /&gt;
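To give a flavor of the version 2 interface (the examples in the next section use the older APIs), the short sketch below creates an object and generates a presigned download link with aws-sdk version 2; the region, bucket name, and key are placeholders:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
require 'aws-sdk'&lt;br /&gt;
&lt;br /&gt;
# resource-oriented interface from the aws-sdk-resources gem&lt;br /&gt;
# the region, bucket name, and key below are placeholders&lt;br /&gt;
s3  = Aws::S3::Resource.new(region: 'us-east-1')&lt;br /&gt;
obj = s3.bucket('my-new-bucket').object('hello.txt')&lt;br /&gt;
&lt;br /&gt;
obj.put(body: 'Hello World!', content_type: 'text/plain')&lt;br /&gt;
&lt;br /&gt;
# presigned download link, valid for one hour&lt;br /&gt;
puts obj.presigned_url(:get, expires_in: 60 * 60)&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;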
==Examples==&lt;br /&gt;
The following section contains examples of Ruby code interfacing with S3. Sources and documentation for the code are provided. Please observe the copyrights if you choose to use any or all of the posted code.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
'''Note:''' There are three key classes in the AWS SDK &amp;lt;ref&amp;gt;[http://docs.aws.amazon.com/AmazonS3/latest/dev/UsingTheMPRubyAPI.html Using the Ruby API]&amp;lt;/ref&amp;gt;:&lt;br /&gt;
&lt;br /&gt;
:* '''AWS::S3''' - Denotes an interface to Amazon S3 for the Ruby SDK. It has the ''#buckets'' instance method for creating new buckets or accessing existing buckets.&lt;br /&gt;
:* '''AWS::S3::Bucket''' - Denotes an Amazon S3 Bucket. It provides the ''#objects'' instance method to access existing objects and also other methods to get information about a bucket.&lt;br /&gt;
:* '''AWS::S3::S3Object''' - Denotes an Amazon S3 object. It provides methods for getting information about the object, as well as for setting access permissions and for copying, deleting, and uploading objects.&lt;br /&gt;
&lt;br /&gt;
===Creating a connection to S3 server===&lt;br /&gt;
&lt;br /&gt;
Connecting to the S3 server is the essential starting point for accessing your data. The following example shows how to connect to the server via SSL &amp;lt;ref&amp;gt;Wikipedia: SSL http://en.wikipedia.org/wiki/SSL&amp;lt;/ref&amp;gt;&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
AWS::S3::Base.establish_connection!(&lt;br /&gt;
        :server            =&amp;gt; 'objects.example.com',&lt;br /&gt;
        :use_ssl           =&amp;gt; true,&lt;br /&gt;
        :access_key_id     =&amp;gt; 'my-access-key',&lt;br /&gt;
        :secret_access_key =&amp;gt; 'my-secret-key'&lt;br /&gt;
)&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
:Source: http://ceph.com/docs/master/radosgw/s3/ruby/&lt;br /&gt;
&lt;br /&gt;
===Listing all buckets you own===&lt;br /&gt;
Buckets hold the objects you upload. To access any object, you must access its bucket first. This example shows how to query the S3 server for a list of all the buckets you own.&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
AWS::S3::Service.buckets.each do |bucket|&lt;br /&gt;
        puts &amp;quot;#{bucket.name}\t#{bucket.creation_date}&amp;quot;&lt;br /&gt;
end&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
'''Expected output:'''&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
mybuckat1   2011-04-21T18:05:39.000Z&lt;br /&gt;
mybuckat2   2011-04-21T18:05:48.000Z&lt;br /&gt;
mybuckat3   2011-04-21T18:07:18.000Z&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
:Source: http://ceph.com/docs/master/radosgw/s3/ruby/&lt;br /&gt;
&lt;br /&gt;
===Listing a bucket's contents===&lt;br /&gt;
For a known bucket (see example for listing all buckets you own), you can also query the server for all objects inside. This code shows you how.&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
new_bucket = AWS::S3::Bucket.find('my-new-bucket')&lt;br /&gt;
new_bucket.each do |object|&lt;br /&gt;
        puts &amp;quot;#{object.key}\t#{object.about['content-length']}\t#{object.about['last-modified']}&amp;quot;&lt;br /&gt;
end&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
'''Expected output'''&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
file1.filex 251262  2011-08-08T21:35:48.000Z&lt;br /&gt;
file2.filex 262518  2011-08-08T21:38:01.000Z&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
:Source: http://ceph.com/docs/master/radosgw/s3/ruby/&lt;br /&gt;
&lt;br /&gt;
===Deleting an object===&lt;br /&gt;
This example shows you how to delete the object &amp;quot;goodbye.txt&amp;quot;. You must specify the bucket as the second parameter of the S3Object.delete function.&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
AWS::S3::S3Object.delete('goodbye.txt', 'my-new-bucket')&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
:Source: http://ceph.com/docs/master/radosgw/s3/ruby/&lt;br /&gt;
&lt;br /&gt;
===Deleting a bucket===&lt;br /&gt;
Bucket removal may be necessary when you're trying to reduce the cost of maintaining your data on the S3 servers. This code allows you to delete buckets but only if they're empty.&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
AWS::S3::Bucket.delete('my-new-bucket')&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
:Source: http://ceph.com/docs/master/radosgw/s3/ruby/&lt;br /&gt;
&lt;br /&gt;
===Forced removal of non-empty buckets===&lt;br /&gt;
If you want to forcibly remove a bucket and dump all of its contents, you can force a deletion as shown below.&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
AWS::S3::Bucket.delete('my-new-bucket', :force =&amp;gt; true)&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
:Source: http://ceph.com/docs/master/radosgw/s3/ruby/&lt;br /&gt;
&lt;br /&gt;
===Creating an object===&lt;br /&gt;
Object creation is important to any program. If your app stores data as objects, you can use the S3 server to host them. In this example we create a text file hello.txt with the content &amp;quot;Hello World!&amp;quot; and save it to my-new-bucket. Note the :content_type option passed to the Ruby method S3Object.store, which sets the MIME type of the stored object.&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
AWS::S3::S3Object.store(&lt;br /&gt;
        'hello.txt',&lt;br /&gt;
        'Hello World!',&lt;br /&gt;
        'my-new-bucket',&lt;br /&gt;
        :content_type =&amp;gt; 'text/plain'&lt;br /&gt;
)&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
:Source: http://ceph.com/docs/master/radosgw/s3/ruby/&lt;br /&gt;
&lt;br /&gt;
===Change an object's ACL (access control list)===&lt;br /&gt;
Securing user data from view by arbitrary web users is important if you're keeping sensitive information. This example shows how to open the hello.txt object created earlier in my-new-bucket to the public (anyone can access it). It also shows how to restrict secret_plans.txt so that no one else can access it.&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
policy = AWS::S3::S3Object.acl('hello.txt', 'my-new-bucket')&lt;br /&gt;
policy.grants = [ AWS::S3::ACL::Grant.grant(:public_read) ]&lt;br /&gt;
AWS::S3::S3Object.acl('hello.txt', 'my-new-bucket', policy)&lt;br /&gt;
&lt;br /&gt;
policy = AWS::S3::S3Object.acl('secret_plans.txt', 'my-new-bucket')&lt;br /&gt;
policy.grants = []&lt;br /&gt;
AWS::S3::S3Object.acl('secret_plans.txt', 'my-new-bucket', policy)&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
:Source: http://ceph.com/docs/master/radosgw/s3/ruby/&lt;br /&gt;
&lt;br /&gt;
===Generating object download urls===&lt;br /&gt;
Generating download links for objects on the S3 server can make it easier for you to share your objects with other users. This example shows how to create a download link for the hello.txt object we created earlier.&lt;br /&gt;
Note that generating a link requires you to specify the bucket the object resides in. Links can be public, as in the case of hello.txt below, or can be set to expire after a given time through the :expires_in option. In the example, the link for secret_plans.txt expires after 1 hour (60 * 60 seconds).&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
puts AWS::S3::S3Object.url_for(&lt;br /&gt;
        'hello.txt',&lt;br /&gt;
        'my-new-bucket',&lt;br /&gt;
        :authenticated =&amp;gt; false&lt;br /&gt;
)&lt;br /&gt;
&lt;br /&gt;
puts AWS::S3::S3Object.url_for(&lt;br /&gt;
        'secret_plans.txt',&lt;br /&gt;
        'my-new-bucket',&lt;br /&gt;
        :expires_in =&amp;gt; 60 * 60&lt;br /&gt;
)&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
'''Expected Output:'''&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
http://objects.dreamhost.com/my-bucket-name/hello.txt&lt;br /&gt;
http://objects.dreamhost.com/my-bucket-name/secret_plans.txt?Signature=XXXXXXXXXXXXXXXXXXXXXXXXXXX&amp;amp;Expires=1316027075&amp;amp;AWSAccessKeyId=XXXXXXXXXXXXXXXXXXX&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
:Source: http://ceph.com/docs/master/radosgw/s3/ruby/&lt;br /&gt;
&lt;br /&gt;
===Download an object to a local file===&lt;br /&gt;
An app may choose to retrieve objects from the S3 server and save them locally, where they can be processed later. This example shows how to stream the poetry.pdf object from the bucket my-new-bucket into a local file.&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
open('/home/larry/documents/poetry.pdf', 'w') do |file|&lt;br /&gt;
        AWS::S3::S3Object.stream('poetry.pdf', 'my-new-bucket') do |chunk|&lt;br /&gt;
                file.write(chunk)&lt;br /&gt;
        end&lt;br /&gt;
end&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
:Source: http://ceph.com/docs/master/radosgw/s3/ruby/&lt;br /&gt;
&lt;br /&gt;
===Upload a file to Amazon S3===&lt;br /&gt;
This example is a combination of many of the examples shown above. A bucket is created and a local file is uploaded to it. The program also generates URLs for the uploaded object and gives the user the option of deleting the object from S3. &lt;br /&gt;
:As per the Apache License v2.0, the following code is reproducible and redistributable with the following license &amp;lt;ref&amp;gt;[http://aws.amazon.com/apache-2-0/ license]&amp;lt;/ref&amp;gt;.&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
# Copyright 2011-2013 Amazon.com, Inc. or its affiliates. All Rights Reserved.&lt;br /&gt;
#&lt;br /&gt;
# Licensed under the Apache License, Version 2.0 (the &amp;quot;License&amp;quot;). You&lt;br /&gt;
# may not use this file except in compliance with the License. A copy of&lt;br /&gt;
# the License is located at&lt;br /&gt;
#&lt;br /&gt;
#     http://aws.amazon.com/apache2.0/&lt;br /&gt;
#&lt;br /&gt;
# or in the &amp;quot;license&amp;quot; file accompanying this file. This file is&lt;br /&gt;
# distributed on an &amp;quot;AS IS&amp;quot; BASIS, WITHOUT WARRANTIES OR CONDITIONS OF&lt;br /&gt;
# ANY KIND, either express or implied. See the License for the specific&lt;br /&gt;
# language governing permissions and limitations under the License.&lt;br /&gt;
&lt;br /&gt;
require 'aws-sdk'&lt;br /&gt;
&lt;br /&gt;
(bucket_name, file_name) = ARGV&lt;br /&gt;
unless bucket_name &amp;amp;&amp;amp; file_name&lt;br /&gt;
  puts &amp;quot;Usage: upload_file.rb &amp;lt;BUCKET_NAME&amp;gt; &amp;lt;FILE_NAME&amp;gt;&amp;quot;&lt;br /&gt;
  exit 1&lt;br /&gt;
end&lt;br /&gt;
&lt;br /&gt;
# get an instance of the S3 interface using the default configuration&lt;br /&gt;
s3 = AWS::S3.new&lt;br /&gt;
&lt;br /&gt;
# create a bucket&lt;br /&gt;
b = s3.buckets.create(bucket_name)&lt;br /&gt;
&lt;br /&gt;
# upload a file&lt;br /&gt;
basename = File.basename(file_name)&lt;br /&gt;
o = b.objects[basename]&lt;br /&gt;
o.write(:file =&amp;gt; file_name)&lt;br /&gt;
&lt;br /&gt;
puts &amp;quot;Uploaded #{file_name} to:&amp;quot;&lt;br /&gt;
puts o.public_url&lt;br /&gt;
&lt;br /&gt;
# generate a presigned URL&lt;br /&gt;
puts &amp;quot;\nUse this URL to download the file:&amp;quot;&lt;br /&gt;
puts o.url_for(:read)&lt;br /&gt;
&lt;br /&gt;
puts &amp;quot;(press any key to delete the object)&amp;quot;&lt;br /&gt;
$stdin.getc&lt;br /&gt;
&lt;br /&gt;
o.delete&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
See the AWS SDK for Ruby documentation for further details - &amp;lt;ref&amp;gt;[http://docs.aws.amazon.com/AWSRubySDK/latest/_index.html AWS SDK for Ruby]&amp;lt;/ref&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==References==&lt;br /&gt;
&amp;lt;references/&amp;gt;&lt;/div&gt;</summary>
		<author><name>Achen4</name></author>
	</entry>
	<entry>
		<id>https://wiki.expertiza.ncsu.edu/index.php?title=CSC/ECE_517_Spring_2015/ch1a_7_SA&amp;diff=93742</id>
		<title>CSC/ECE 517 Spring 2015/ch1a 7 SA</title>
		<link rel="alternate" type="text/html" href="https://wiki.expertiza.ncsu.edu/index.php?title=CSC/ECE_517_Spring_2015/ch1a_7_SA&amp;diff=93742"/>
		<updated>2015-02-14T02:55:34Z</updated>

		<summary type="html">&lt;p&gt;Achen4: /* Background */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;[[File:S3.gif|frame|Source: http://www.w7cloud.com/7-reasons-to-use-amazon-s3-cloud-computing-online-storage/|right]] Amazon Simple Storage Service (Amazon S3) is a remote, scalable, secure, and cost-efficient storage service provided by Amazon. Users are able to access their storage on Amazon S3 from the web via REST &amp;lt;ref&amp;gt;[http://en.wikipedia.org/wiki/Representational_state_transfer Wikipedia: REST]&amp;lt;/ref&amp;gt;, HTTP &amp;lt;ref&amp;gt;[http://en.wikipedia.org/wiki/Hypertext_Transfer_Protocol HTTP]&amp;lt;/ref&amp;gt;, or SOAP &amp;lt;ref&amp;gt;[http://en.wikipedia.org/wiki/SOAP SOAP]&amp;lt;/ref&amp;gt;, making their data accessible from virtually anywhere in the world. Amazon S3 implements redundancy across multiple devices in multiple facilities in order to safeguard against application failure and data loss and to minimize downtime &amp;lt;ref&amp;gt;[http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/AmazonS3.html Amazon S3]&amp;lt;/ref&amp;gt;. Some of the most prominent users of Amazon S3 include Netflix, SmugMug, WeTransfer, Pinterest, and NASDAQ &amp;lt;ref&amp;gt;[http://aws.amazon.com/s3/ Amazon S3 Homepage]&amp;lt;/ref&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
[https://docs.google.com/a/ncsu.edu/document/d/1TgBtp7flIPKJwkkShgtcIkt--mtHuwVHsQX6Tpzj1rc/edit Writing Assignment 1a]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
==Background==&lt;br /&gt;
&lt;br /&gt;
Amazon S3 launched in March 2006 in the United States &amp;lt;ref&amp;gt;[http://phx.corporate-ir.net/phoenix.zhtml?c=176060&amp;amp;p=irol-newsArticle&amp;amp;ID=830816 Amazon Press Release]&amp;lt;/ref&amp;gt; and in Europe in November 2007 &amp;lt;ref&amp;gt;[http://phx.corporate-ir.net/phoenix.zhtml?c=176060&amp;amp;p=irol-newsArticle&amp;amp;ID=1072982 Amazon Press Release]&amp;lt;/ref&amp;gt;. Since its inception, Amazon S3 has reported tremendous growth. In July 2006, S3 hosted 800 million objects; by April 2007, 5 billion objects; by October 2007, 10 billion; by January 2008, 14 billion &amp;lt;ref&amp;gt;[http://www.allthingsdistributed.com/2008/03/happy_birthday_amazon_s3.html Happy birthday Amazon S3]&amp;lt;/ref&amp;gt;; by October 2008, 29 billion &amp;lt;ref&amp;gt;[https://aws.amazon.com/blogs/aws/amazon-s3-now/ amazon s3 now]&amp;lt;/ref&amp;gt;; by March 2009, 52 billion &amp;lt;ref&amp;gt;[https://aws.amazon.com/blogs/aws/celebrating-s3s-third-birthday-with-an-upload-promotion/ Upload promotion]&amp;lt;/ref&amp;gt;; and by August 2009, 64 billion &amp;lt;ref&amp;gt;[http://www.eweek.com/c/a/Cloud-Computing/Amazons-Head-Start-in-the-Cloud-Pays-Off-584083 Amazons Head Start in the Cloud]&amp;lt;/ref&amp;gt;. As of April 2013, S3 hosted more than 2 trillion objects and served an average of 1.1 million requests every second &amp;lt;ref&amp;gt;[https://aws.amazon.com/blogs/aws/amazon-s3-two-trillion-objects-11-million-requests-second/ Two trillion objects]&amp;lt;/ref&amp;gt;. While Amazon S3 is a fast-growing cloud storage service, many alternatives to it exist. A brief overview of the cost and performance comparison is shown below. &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
===Alternatives to S3===&lt;br /&gt;
Several alternatives to Amazon S3 currently exist:&lt;br /&gt;
:*[https://cloud.google.com/storage/ Google Cloud Storage]&lt;br /&gt;
:*[http://azure.microsoft.com/en-us/ Microsoft Azure]&lt;br /&gt;
&lt;br /&gt;
As with any online cloud storage service, the cost of storage is always a concern. The following table shows a cost comparison between them:&lt;br /&gt;
&lt;br /&gt;
[[File:PriceCompare.png |frame |center|Source: http://www.cloudberrylab.com/blog/amazon-s3-azure-and-google-cloud-prices-compare/]]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
While all three services are fairly comparable in price, there are developer-support considerations when selecting a service. For instance, if an app is focused on the [https://www.android.com/ Android OS], using Google Cloud Storage provides more tightly coupled APIs for your app, so the development overhead may be much lower than integrating S3 through Rails APIs for a Google-centric app. Similarly, [http://www.windowsphone.com/en-us/features Windows Phone OS] apps would find more native support in Microsoft Azure, and likewise Kindle Fire apps have more native support on [https://developer.amazon.com/public/apis Fire OS].&lt;br /&gt;
&lt;br /&gt;
The reliability of a cloud storage service like S3 plays a heavy role in the selection process. Less downtime translates to a more robust and reliable app. In 2014, Amazon S3 registered only 23 outages totaling 2.69 hours of downtime whereas Google registered 8 outages totaling 14.23 hours of downtime. Microsoft Azure totaled 141 outages summing 10.97 hours. &amp;lt;ref&amp;gt;[https://gigaom.com/2015/01/07/amazon-web-services-tops-list-of-most-reliable-public-clouds/  Barb Darrow. Amazon Web Services tops list of most reliable public clouds ]&amp;lt;/ref&amp;gt;&lt;br /&gt;
&lt;br /&gt;
[[File:ServiceCompare.png |frame |center|Source: https://gigaom.com/2015/01/07/amazon-web-services-tops-list-of-most-reliable-public-clouds/]]&lt;br /&gt;
&lt;br /&gt;
==Design==&lt;br /&gt;
&lt;br /&gt;
S3 is an example of object storage and is not a traditional hierarchical file system. S3 exposes a simple feature set to improve robustness, and all data in S3 is accessed in terms of objects and buckets. &lt;br /&gt;
&lt;br /&gt;
===Objects===&lt;br /&gt;
&lt;br /&gt;
Objects are the basic units of storage in Amazon S3. Each object is composed of object data and metadata, and S3 supports objects of up to 5 terabytes. The metadata associated with an object is a set of name-value pairs that describe it, such as the date it was last modified, and users can also store custom data about the object in its metadata. Every object is identified by a user-defined key and, if versioning is enabled on its bucket, by a version ID. &amp;lt;ref&amp;gt;[http://docs.aws.amazon.com/AmazonS3/latest/dev/Introduction.html S3 Introduction]&amp;lt;/ref&amp;gt; An object consists of the following: Key, Version ID, Value, Metadata, Subresources, and Access Control Information. &amp;lt;ref&amp;gt;[http://docs.aws.amazon.com/AmazonS3/latest/dev/UsingObjects.html Using Objects]&amp;lt;/ref&amp;gt;&lt;br /&gt;
&lt;br /&gt;
===Buckets===&lt;br /&gt;
&lt;br /&gt;
A bucket is a container for objects, and every object must belong to a bucket. Any number of objects can be stored in a bucket. Buckets can be configured to be hosted in a particular region (US, EU, Asia Pacific, etc.) in order to optimize latency. S3 limits the number of buckets per account to 100. &amp;lt;ref&amp;gt;[http://docs.aws.amazon.com/AmazonS3/latest/dev/UsingBucket.html Using Bucket]&amp;lt;/ref&amp;gt;&lt;br /&gt;
&lt;br /&gt;
===Keys and Metadata===&lt;br /&gt;
&lt;br /&gt;
A user specifies a key at object creation; the key uniquely identifies the object within its bucket. Keys can be at most 1024 bytes long.&lt;br /&gt;
&lt;br /&gt;
There are two kinds of metadata for an object: system metadata and object metadata. System metadata is used by S3 for object management; for example, the object's date and Content-Type are stored as system metadata. Object metadata is optional and can be used by the user to attach additional metadata to an object during object creation. &amp;lt;ref&amp;gt;[http://docs.aws.amazon.com/AmazonS3/latest/dev/UsingMetadata.html Using Metadata]&amp;lt;/ref&amp;gt;&lt;br /&gt;
&lt;br /&gt;
===Regions===&lt;br /&gt;
&lt;br /&gt;
Regions allow a user to specify the geographical region where the buckets will be stored. This can be used to optimize latency and minimize costs.&lt;br /&gt;
S3 supports the following regions - US Standard, US West (Oregon) region, US West (N. California) region, EU (Ireland) region, EU (Frankfurt) region, Asia Pacific (Singapore) region, Asia Pacific (Tokyo) region, Asia Pacific (Sydney) region, South America (Sao Paulo) region &amp;lt;ref&amp;gt;[http://docs.aws.amazon.com/AmazonS3/latest/dev/Introduction.html#Regions Regions]&amp;lt;/ref&amp;gt;&lt;br /&gt;
&lt;br /&gt;
====Versioning====&lt;br /&gt;
&lt;br /&gt;
Versioning can be enabled for a bucket and used to retrieve and restore every version of an object in that bucket. Once versioning is enabled, every change to an object (create, modify, delete) results in a separate version of the object, which can later be used for restoring or recovery. Versioning is configured at the bucket level, not for individual objects. A version-enabled bucket cannot be returned to an unversioned state; versioning can only be suspended. &amp;lt;ref&amp;gt;[http://docs.aws.amazon.com/AmazonS3/latest/dev/Versioning.html Versioning]&amp;lt;/ref&amp;gt;&lt;br /&gt;
&lt;br /&gt;
===Access Permissions===&lt;br /&gt;
&lt;br /&gt;
All resources (buckets, objects, etc.) in Amazon S3 are private by default. Only the resource owner can access a resource, and the owner can grant other users access to it. There are two types of access policies in S3: resource-based policies and user policies. Resource-based policies are attached to a particular resource, while user policies are assigned to a particular user.&amp;lt;ref&amp;gt;[http://docs.aws.amazon.com/AmazonS3/latest/dev/s3-access-control.html Access Control]&amp;lt;/ref&amp;gt;&lt;br /&gt;
&lt;br /&gt;
===Data Protection===&lt;br /&gt;
&lt;br /&gt;
Objects are redundantly stored on multiple devices across multiple facilities within a region for durability. To improve durability, write requests do not return success until the data has been stored across multiple facilities. Checksums are also used to verify data integrity; if any corruption is detected, it is repaired using the redundant data.&amp;lt;ref&amp;gt;[http://docs.aws.amazon.com/AmazonS3/latest/dev/DataDurability.html Data Durability]&amp;lt;/ref&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==Ruby and Amazon S3==&lt;br /&gt;
&lt;br /&gt;
Amazon Web Services (AWS) provides an SDK &amp;lt;ref&amp;gt;[http://rubygems.org/gems/aws-sdk download]&amp;lt;/ref&amp;gt; that works with Ruby for many Amazon web services, including Amazon S3. Developers new to the AWS SDK should begin with version 2, as it includes many built-in features such as waiters, automatically paginated responses, and a streamlined plugin-style architecture. Version 2 of the SDK consists of two &amp;quot;packages&amp;quot;, also referred to as &amp;quot;gems&amp;quot; &amp;lt;ref&amp;gt;[http://en.wikipedia.org/wiki/RubyGems Wikipedia: RubyGems]&amp;lt;/ref&amp;gt;:&lt;br /&gt;
&lt;br /&gt;
:* '''aws-sdk-core''' - provides a direct mapping to the AWS APIs including automatic response paging, waiters, parameter validation, and Ruby type support&lt;br /&gt;
:* '''aws-sdk-resources'''  - provides an object-oriented abstraction over low-level interfaces in the core to reduce the complexity of utilizing core interfaces; resource objects reference other objects such as an Amazon S3 instance and the attributes and actions as instance variables and methods.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
It should also be noted that there is a version 1 of the AWS SDK that lacks some &amp;quot;convenience features&amp;quot; otherwise available in version 2 of the SDK. For more information, see the [http://ruby.awsblog.com/post/Tx2OMCYFEZX2I6A/AWS-SDK-for-Ruby-V2-Preview-Release AWS Ruby Development Blog].&lt;br /&gt;
&lt;br /&gt;
==Examples==&lt;br /&gt;
The following section contains examples of ruby interfacing with S3. Sources and documentation for the code are provided. Please observe the copyrights if you choose to use any or all of the posted code.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
'''Note:''' There are 3 key classes in AWS SDK &amp;lt;ref&amp;gt;[http://docs.aws.amazon.com/AmazonS3/latest/dev/UsingTheMPRubyAPI.html Using the Ruby API]&amp;lt;/ref&amp;gt; -&lt;br /&gt;
&lt;br /&gt;
:* '''AWS::S3''' - Denotes an interface to Amazon S3 for the Ruby SDK. It has the ''#buckets'' instance method for creating new buckets or accessing existing buckets.&lt;br /&gt;
:* '''AWS::S3::Bucket''' - Denotes an Amazon S3 Bucket. It provides the ''#objects'' instance method to access existing objects and also other methods to get information about a bucket.&lt;br /&gt;
:* '''AWS::S3::S3Object''' - Denotes an Amazon S3 object. It provides methods for getting information about the object, as well as for setting access permissions and for copying, deleting, and uploading objects.&lt;br /&gt;
&lt;br /&gt;
===Creating a connection to S3 server===&lt;br /&gt;
&lt;br /&gt;
Connecting to the S3 server is the essential starting point for accessing your data. The following example shows how to connect to the server via SSL &amp;lt;ref&amp;gt;Wikipedia: SSL http://en.wikipedia.org/wiki/SSL&amp;lt;/ref&amp;gt;&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
AWS::S3::Base.establish_connection!(&lt;br /&gt;
        :server            =&amp;gt; 'objects.example.com',&lt;br /&gt;
        :use_ssl           =&amp;gt; true,&lt;br /&gt;
        :access_key_id     =&amp;gt; 'my-access-key',&lt;br /&gt;
        :secret_access_key =&amp;gt; 'my-secret-key'&lt;br /&gt;
)&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
:Source: http://ceph.com/docs/master/radosgw/s3/ruby/&lt;br /&gt;
&lt;br /&gt;
===Listing all buckets you own===&lt;br /&gt;
Buckets hold data about the objects you upload. To access any object, you must access the bucket first. This example shows you how to query the S3 server for a list of all the buckets you own.&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
AWS::S3::Service.buckets.each do |bucket|&lt;br /&gt;
        puts &amp;quot;#{bucket.name}\t#{bucket.creation_date}&amp;quot;&lt;br /&gt;
end&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
'''Expected output:'''&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
mybuckat1   2011-04-21T18:05:39.000Z&lt;br /&gt;
mybuckat2   2011-04-21T18:05:48.000Z&lt;br /&gt;
mybuckat3   2011-04-21T18:07:18.000Z&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
:Source: http://ceph.com/docs/master/radosgw/s3/ruby/&lt;br /&gt;
&lt;br /&gt;
===Listing a bucket's contents===&lt;br /&gt;
For a known bucket (see example for listing all buckets you own), you can also query the server for all objects inside. This code shows you how.&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
new_bucket = AWS::S3::Bucket.find('my-new-bucket')&lt;br /&gt;
new_bucket.each do |object|&lt;br /&gt;
        puts &amp;quot;#{object.key}\t#{object.about['content-length']}\t#{object.about['last-modified']}&amp;quot;&lt;br /&gt;
end&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
'''Expected output'''&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
file1.filex 251262  2011-08-08T21:35:48.000Z&lt;br /&gt;
file2.filex 262518  2011-08-08T21:38:01.000Z&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
:Source: http://ceph.com/docs/master/radosgw/s3/ruby/&lt;br /&gt;
&lt;br /&gt;
===Deleting an object===&lt;br /&gt;
This example shows you how to delete the object &amp;quot;goodbye.txt&amp;quot;. You must specify the bucket as the second parameter of the S3Object.delete function.&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
AWS::S3::S3Object.delete('goodbye.txt', 'my-new-bucket')&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
:Source: http://ceph.com/docs/master/radosgw/s3/ruby/&lt;br /&gt;
&lt;br /&gt;
===Deleting a bucket===&lt;br /&gt;
Bucket removal may be necessary when you're trying to reduce the cost of maintaining your data on the S3 servers. This code allows you to delete buckets but only if they're empty.&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
AWS::S3::Bucket.delete('my-new-bucket')&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
:Source: http://ceph.com/docs/master/radosgw/s3/ruby/&lt;br /&gt;
&lt;br /&gt;
===Forced removal of non-empty buckets===&lt;br /&gt;
If you want to forcibly remove a bucket and dump all of its contents, you can force a deletion as shown below.&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
AWS::S3::Bucket.delete('my-new-bucket', :force =&amp;gt; true)&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
:Source: http://ceph.com/docs/master/radosgw/s3/ruby/&lt;br /&gt;
&lt;br /&gt;
===Creating an object===&lt;br /&gt;
Object creation is important to any program. If your app stores data via objects, you can use the S3 server to host them. In this example we create a text file hello.txt with content &amp;quot;Hello World!&amp;quot; and save it to my-new-bucket. Note that the :content_type is required for the ruby method S3Object.store.&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
AWS::S3::S3Object.store(&lt;br /&gt;
        'hello.txt',&lt;br /&gt;
        'Hello World!',&lt;br /&gt;
        'my-new-bucket',&lt;br /&gt;
        :content_type =&amp;gt; 'text/plain'&lt;br /&gt;
)&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
:Source: http://ceph.com/docs/master/radosgw/s3/ruby/&lt;br /&gt;
&lt;br /&gt;
===Change an object's ACL (access control list)===&lt;br /&gt;
Securing user data from view by arbitrary web users is important if you're keeping sensitive information. This example shows how to open the hello.txt object created earlier in my-new-bucket to the public (anyone can access it). It also shows how to restrict secret_plans.txt so that no one else can access it.&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
policy = AWS::S3::S3Object.acl('hello.txt', 'my-new-bucket')&lt;br /&gt;
policy.grants = [ AWS::S3::ACL::Grant.grant(:public_read) ]&lt;br /&gt;
AWS::S3::S3Object.acl('hello.txt', 'my-new-bucket', policy)&lt;br /&gt;
&lt;br /&gt;
policy = AWS::S3::S3Object.acl('secret_plans.txt', 'my-new-bucket')&lt;br /&gt;
policy.grants = []&lt;br /&gt;
AWS::S3::S3Object.acl('secret_plans.txt', 'my-new-bucket', policy)&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
:Source: http://ceph.com/docs/master/radosgw/s3/ruby/&lt;br /&gt;
&lt;br /&gt;
===Generating object download urls===&lt;br /&gt;
Generating download links for objects on the S3 server can make it easier for you to share your objects with other users. This example shows how to create a download link for the hello.txt object we created earlier.&lt;br /&gt;
Note that generating a link requires you to specify the bucket it resides in. Links can be public, as in the case of hello.txt below, or be set to expire on a timer through the expires_in symbol. In the example, the expiration of secret_plans.txt is 1 hour (60s * 60).&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
puts AWS::S3::S3Object.url_for(&lt;br /&gt;
        'hello.txt',&lt;br /&gt;
        'my-new-bucket',&lt;br /&gt;
        :authenticated =&amp;gt; false&lt;br /&gt;
)&lt;br /&gt;
&lt;br /&gt;
puts AWS::S3::S3Object.url_for(&lt;br /&gt;
        'secret_plans.txt',&lt;br /&gt;
        'my-new-bucket',&lt;br /&gt;
        :expires_in =&amp;gt; 60 * 60&lt;br /&gt;
)&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
'''Expected Output:'''&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
http://objects.dreamhost.com/my-bucket-name/hello.txt&lt;br /&gt;
http://objects.dreamhost.com/my-bucket-name/secret_plans.txt?Signature=XXXXXXXXXXXXXXXXXXXXXXXXXXX&amp;amp;Expires=1316027075&amp;amp;AWSAccessKeyId=XXXXXXXXXXXXXXXXXXX&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
:Source: http://ceph.com/docs/master/radosgw/s3/ruby/&lt;br /&gt;
&lt;br /&gt;
===Download an object to a local file===&lt;br /&gt;
An app may choose to retrieve objects from the S3 server and save them locally, where they can be processed later. This example shows how to stream the poetry.pdf object from the bucket my-new-bucket into a local file.&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
open('/home/larry/documents/poetry.pdf', 'w') do |file|&lt;br /&gt;
        AWS::S3::S3Object.stream('poetry.pdf', 'my-new-bucket') do |chunk|&lt;br /&gt;
                file.write(chunk)&lt;br /&gt;
        end&lt;br /&gt;
end&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
:Source: http://ceph.com/docs/master/radosgw/s3/ruby/&lt;br /&gt;
&lt;br /&gt;
===Upload a file to Amazon S3===&lt;br /&gt;
This example is a combination of many of the examples shown above. A bucket is created and a local file is uploaded to it. The program also generates URLs for the uploaded object and gives the user the option of deleting the object from S3. &lt;br /&gt;
:As per the Apache License v2.0, the following code is reproducible and redistributable with the following license &amp;lt;ref&amp;gt;[http://aws.amazon.com/apache-2-0/ license]&amp;lt;/ref&amp;gt;.&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
# Copyright 2011-2013 Amazon.com, Inc. or its affiliates. All Rights Reserved.&lt;br /&gt;
#&lt;br /&gt;
# Licensed under the Apache License, Version 2.0 (the &amp;quot;License&amp;quot;). You&lt;br /&gt;
# may not use this file except in compliance with the License. A copy of&lt;br /&gt;
# the License is located at&lt;br /&gt;
#&lt;br /&gt;
#     http://aws.amazon.com/apache2.0/&lt;br /&gt;
#&lt;br /&gt;
# or in the &amp;quot;license&amp;quot; file accompanying this file. This file is&lt;br /&gt;
# distributed on an &amp;quot;AS IS&amp;quot; BASIS, WITHOUT WARRANTIES OR CONDITIONS OF&lt;br /&gt;
# ANY KIND, either express or implied. See the License for the specific&lt;br /&gt;
# language governing permissions and limitations under the License.&lt;br /&gt;
&lt;br /&gt;
require 'aws-sdk'&lt;br /&gt;
&lt;br /&gt;
(bucket_name, file_name) = ARGV&lt;br /&gt;
unless bucket_name &amp;amp;&amp;amp; file_name&lt;br /&gt;
  puts &amp;quot;Usage: upload_file.rb &amp;lt;BUCKET_NAME&amp;gt; &amp;lt;FILE_NAME&amp;gt;&amp;quot;&lt;br /&gt;
  exit 1&lt;br /&gt;
end&lt;br /&gt;
&lt;br /&gt;
# get an instance of the S3 interface using the default configuration&lt;br /&gt;
s3 = AWS::S3.new&lt;br /&gt;
&lt;br /&gt;
# create a bucket&lt;br /&gt;
b = s3.buckets.create(bucket_name)&lt;br /&gt;
&lt;br /&gt;
# upload a file&lt;br /&gt;
basename = File.basename(file_name)&lt;br /&gt;
o = b.objects[basename]&lt;br /&gt;
o.write(:file =&amp;gt; file_name)&lt;br /&gt;
&lt;br /&gt;
puts &amp;quot;Uploaded #{file_name} to:&amp;quot;&lt;br /&gt;
puts o.public_url&lt;br /&gt;
&lt;br /&gt;
# generate a presigned URL&lt;br /&gt;
puts &amp;quot;\nUse this URL to download the file:&amp;quot;&lt;br /&gt;
puts o.url_for(:read)&lt;br /&gt;
&lt;br /&gt;
puts &amp;quot;(press any key to delete the object)&amp;quot;&lt;br /&gt;
$stdin.getc&lt;br /&gt;
&lt;br /&gt;
o.delete&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
See the following link for the documentation for AWS SDK - &amp;lt;ref&amp;gt;[http://docs.aws.amazon.com/AWSRubySDK/latest/_index.html AWS SDK for Ruby]&amp;lt;/ref&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==References==&lt;br /&gt;
&amp;lt;references/&amp;gt;&lt;/div&gt;</summary>
		<author><name>Achen4</name></author>
	</entry>
	<entry>
		<id>https://wiki.expertiza.ncsu.edu/index.php?title=CSC/ECE_517_Spring_2015/ch1a_7_SA&amp;diff=93741</id>
		<title>CSC/ECE 517 Spring 2015/ch1a 7 SA</title>
		<link rel="alternate" type="text/html" href="https://wiki.expertiza.ncsu.edu/index.php?title=CSC/ECE_517_Spring_2015/ch1a_7_SA&amp;diff=93741"/>
		<updated>2015-02-14T02:51:17Z</updated>

		<summary type="html">&lt;p&gt;Achen4: /* Alternatives to S3 */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;[[File:S3.gif|frame|Source: http://www.w7cloud.com/7-reasons-to-use-amazon-s3-cloud-computing-online-storage/|right]] Amazon Simple Storage Service (Amazon S3) is a remote, scalable, secure, and cost-efficient storage service provided by Amazon. Users are able to access their storage on Amazon S3 from the web via REST &amp;lt;ref&amp;gt;[http://en.wikipedia.org/wiki/Representational_state_transfer Wikipedia: REST]&amp;lt;/ref&amp;gt;, HTTP &amp;lt;ref&amp;gt;[http://en.wikipedia.org/wiki/Hypertext_Transfer_Protocol HTTP]&amp;lt;/ref&amp;gt;, or SOAP &amp;lt;ref&amp;gt;[http://en.wikipedia.org/wiki/SOAP SOAP]&amp;lt;/ref&amp;gt;, making their data accessible from virtually anywhere in the world. Amazon S3 implements redundancy across multiple devices in multiple facilities in order to safeguard against application failure and data loss and to minimize downtime &amp;lt;ref&amp;gt;[http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/AmazonS3.html Amazon S3]&amp;lt;/ref&amp;gt;. Some of the most prominent users of Amazon S3 include Netflix, SmugMug, WeTransfer, Pinterest, and NASDAQ &amp;lt;ref&amp;gt;[http://aws.amazon.com/s3/ Amazon S3 Homepage]&amp;lt;/ref&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
[https://docs.google.com/a/ncsu.edu/document/d/1TgBtp7flIPKJwkkShgtcIkt--mtHuwVHsQX6Tpzj1rc/edit Writing Assignment 1a]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
==Background==&lt;br /&gt;
&lt;br /&gt;
Amazon S3 launched in March 2006 in the United States &amp;lt;ref&amp;gt;[http://phx.corporate-ir.net/phoenix.zhtml?c=176060&amp;amp;p=irol-newsArticle&amp;amp;ID=830816 Amazon Press Release]&amp;lt;/ref&amp;gt; and in Europe in November 2007 &amp;lt;ref&amp;gt;[http://phx.corporate-ir.net/phoenix.zhtml?c=176060&amp;amp;p=irol-newsArticle&amp;amp;ID=1072982 Amazon Press Release]&amp;lt;/ref&amp;gt;. Since its inception, Amazon S3 has reported tremendous growth. In July 2006, S3 hosted 800 million objects; by April 2007, 5 billion objects; by October 2007, 10 billion; by January 2008, 14 billion &amp;lt;ref&amp;gt;[http://www.allthingsdistributed.com/2008/03/happy_birthday_amazon_s3.html Happy birthday Amazon S3]&amp;lt;/ref&amp;gt;; by October 2008, 29 billion &amp;lt;ref&amp;gt;[https://aws.amazon.com/blogs/aws/amazon-s3-now/ amazon s3 now]&amp;lt;/ref&amp;gt;; by March 2009, 52 billion &amp;lt;ref&amp;gt;[https://aws.amazon.com/blogs/aws/celebrating-s3s-third-birthday-with-an-upload-promotion/ Upload promotion]&amp;lt;/ref&amp;gt;; and by August 2009, 64 billion &amp;lt;ref&amp;gt;[http://www.eweek.com/c/a/Cloud-Computing/Amazons-Head-Start-in-the-Cloud-Pays-Off-584083 Amazons Head Start in the Cloud]&amp;lt;/ref&amp;gt;. As of April 2013, S3 hosted more than 2 trillion objects and served an average of 1.1 million requests every second &amp;lt;ref&amp;gt;[https://aws.amazon.com/blogs/aws/amazon-s3-two-trillion-objects-11-million-requests-second/ Two trillion objects]&amp;lt;/ref&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
===Alternatives to S3===&lt;br /&gt;
Several alternatives to Amazon S3 currently exist:&lt;br /&gt;
:*[https://cloud.google.com/storage/ Google Cloud Storage]&lt;br /&gt;
:*[http://azure.microsoft.com/en-us/ Microsoft Azure]&lt;br /&gt;
&lt;br /&gt;
As with any online cloud storage service, the cost of storage is always a concern. The following table shows a cost comparison between them:&lt;br /&gt;
&lt;br /&gt;
[[File:PriceCompare.png |frame |center|Source: http://www.cloudberrylab.com/blog/amazon-s3-azure-and-google-cloud-prices-compare/]]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
While all three services are fairly comparable in price, there are developer-support considerations when selecting a service. For instance, if an app is focused on the [https://www.android.com/ Android OS], using Google Cloud Storage provides more tightly coupled APIs for your app, so the development overhead may be much lower than integrating S3 through Rails APIs for a Google-centric app. Similarly, [http://www.windowsphone.com/en-us/features Windows Phone OS] apps would find more native support in Microsoft Azure, and likewise Kindle Fire apps have more native support on [https://developer.amazon.com/public/apis Fire OS].&lt;br /&gt;
&lt;br /&gt;
The reliability of a cloud storage service like S3 plays a heavy role in the selection process. Less downtime translates to a more robust and reliable app. In 2014, Amazon S3 registered only 23 outages totaling 2.69 hours of downtime whereas Google registered 8 outages totaling 14.23 hours of downtime. Microsoft Azure totaled 141 outages summing 10.97 hours. &amp;lt;ref&amp;gt;[https://gigaom.com/2015/01/07/amazon-web-services-tops-list-of-most-reliable-public-clouds/  Barb Darrow. Amazon Web Services tops list of most reliable public clouds ]&amp;lt;/ref&amp;gt;&lt;br /&gt;
&lt;br /&gt;
[[File:ServiceCompare.png |frame |center|Source: https://gigaom.com/2015/01/07/amazon-web-services-tops-list-of-most-reliable-public-clouds/]]&lt;br /&gt;
&lt;br /&gt;
===Design===&lt;br /&gt;
&lt;br /&gt;
S3 is an example of object storage and is not a traditional hierarchical file system. S3 exposes a simple feature set to improve robustness, and all data in S3 is accessed in terms of objects and buckets. &lt;br /&gt;
&lt;br /&gt;
====Objects====&lt;br /&gt;
&lt;br /&gt;
Objects are the basic units of storage in Amazon S3. Each object is composed of object data and metadata, and S3 supports objects of up to 5 terabytes. The metadata associated with an object is a set of name-value pairs that describe it, such as the date it was last modified, and users can also store custom data about the object in its metadata. Every object is identified by a user-defined key and, if versioning is enabled on its bucket, by a version ID. &amp;lt;ref&amp;gt;[http://docs.aws.amazon.com/AmazonS3/latest/dev/Introduction.html S3 Introduction]&amp;lt;/ref&amp;gt; An object consists of the following: Key, Version ID, Value, Metadata, Subresources, and Access Control Information. &amp;lt;ref&amp;gt;[http://docs.aws.amazon.com/AmazonS3/latest/dev/UsingObjects.html Using Objects]&amp;lt;/ref&amp;gt;&lt;br /&gt;
&lt;br /&gt;
====Buckets====&lt;br /&gt;
&lt;br /&gt;
A bucket is a container for objects, and every object must belong to a bucket. Any number of objects can be stored in a bucket. Buckets can be configured to be hosted in a particular region (US, EU, Asia Pacific, etc.) in order to optimize latency. S3 limits the number of buckets per account to 100. &amp;lt;ref&amp;gt;[http://docs.aws.amazon.com/AmazonS3/latest/dev/UsingBucket.html Using Bucket]&amp;lt;/ref&amp;gt;&lt;br /&gt;
&lt;br /&gt;
====Keys and Metadata====&lt;br /&gt;
&lt;br /&gt;
A user specifies a key at object creation; the key uniquely identifies the object within its bucket. Keys can be at most 1024 bytes long.&lt;br /&gt;
&lt;br /&gt;
There are two kinds of metadata for an object: system metadata and object metadata. System metadata is used by S3 for object management; for example, the object's date and Content-Type are stored as system metadata. Object metadata is optional and can be used by the user to attach additional metadata to an object during object creation. &amp;lt;ref&amp;gt;[http://docs.aws.amazon.com/AmazonS3/latest/dev/UsingMetadata.html Using Metadata]&amp;lt;/ref&amp;gt;&lt;br /&gt;
&lt;br /&gt;
====Regions====&lt;br /&gt;
&lt;br /&gt;
Regions allow a user to specify the geographical region where the buckets will be stored. This can be used to optimize latency and minimize costs.&lt;br /&gt;
S3 supports the following regions - US Standard, US West (Oregon) region, US West (N. California) region, EU (Ireland) region, EU (Frankfurt) region, Asia Pacific (Singapore) region, Asia Pacific (Tokyo) region, Asia Pacific (Sydney) region, South America (Sao Paulo) region &amp;lt;ref&amp;gt;[http://docs.aws.amazon.com/AmazonS3/latest/dev/Introduction.html#Regions Regions]&amp;lt;/ref&amp;gt;&lt;br /&gt;
&lt;br /&gt;
====Versioning====&lt;br /&gt;
&lt;br /&gt;
Versioning can be enabled for a bucket and used to retrieve and restore every version of an object in that bucket. Once versioning is enabled, every change to an object (create, modify, delete) results in a separate version of the object, which can later be used for restoring or recovery. Versioning is configured at the bucket level, not for individual objects. A version-enabled bucket cannot be returned to an unversioned state; versioning can only be suspended. &amp;lt;ref&amp;gt;[http://docs.aws.amazon.com/AmazonS3/latest/dev/Versioning.html Versioning]&amp;lt;/ref&amp;gt;&lt;br /&gt;
&lt;br /&gt;
====Access Permissions====&lt;br /&gt;
&lt;br /&gt;
All resources (buckets, objects, etc.) are private in Amazon S3 by default. Only the resource owner can access a resource, and the owner can grant other users access to it. There are two types of access policies in S3: resource-based policies and user policies. Resource-based policies are attached to a particular resource, and user policies are assigned to a particular user.&amp;lt;ref&amp;gt;[http://docs.aws.amazon.com/AmazonS3/latest/dev/s3-access-control.html Access Control]&amp;lt;/ref&amp;gt;&lt;br /&gt;
&lt;br /&gt;
====Data Protection====&lt;br /&gt;
&lt;br /&gt;
Objects are redundantly stored on multiple devices across multiple facilities within a region for durability. To improve durability, write requests do not return success before storing the data across multiple facilities. Also checksums are used to verify data integrity. If any corruption is detected, it is repaired using redundant data.&amp;lt;ref&amp;gt;[http://docs.aws.amazon.com/AmazonS3/latest/dev/DataDurability.html Data Durability]&amp;lt;/ref&amp;gt;&lt;br /&gt;
&lt;br /&gt;
===Ruby and Amazon S3===&lt;br /&gt;
&lt;br /&gt;
Amazon Web Services (AWS) provides an SDK &amp;lt;ref&amp;gt;[http://rubygems.org/gems/aws-sdk| download]&amp;lt;/ref&amp;gt; that works with Ruby for many Amazon web services, including Amazon S3. Developers new to the AWS SDK should begin with version 2, as it includes many built-in features such as waiters, automatically paginated responses, and a streamlined plugin-style architecture. Version 2 of the SDK has two &amp;quot;packages&amp;quot;, also referred to as &amp;quot;gems&amp;quot; &amp;lt;ref&amp;gt;[http://en.wikipedia.org/wiki/RubyGems Wikipedia: RubyGems]&amp;lt;/ref&amp;gt;:&lt;br /&gt;
&lt;br /&gt;
:* '''aws-sdk-core''' - provides a direct mapping to the AWS APIs including automatic response paging, waiters, parameter validation, and Ruby type support&lt;br /&gt;
:* '''aws-sdk-resources'''  - provides an object-oriented abstraction over low-level interfaces in the core to reduce the complexity of utilizing core interfaces; resource objects reference other objects such as an Amazon S3 instance and the attributes and actions as instance variables and methods.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
It should also be noted that there exists a version 1 of the AWS SDK that lacks some of the &amp;quot;convenience features&amp;quot; available in version 2 of the SDK. For more information, see the AWS Ruby Development Blog. &amp;lt;ref&amp;gt;[http://ruby.awsblog.com/post/Tx2OMCYFEZX2I6A/AWS-SDK-for-Ruby-V2-Preview-Release|AWS Ruby Development Blog]&amp;lt;/ref&amp;gt;&lt;br /&gt;
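&lt;br /&gt;
For comparison, the following is a minimal sketch of the version 2 interface (namespaced Aws rather than AWS); the region and bucket name are placeholders, and credentials are assumed to come from the environment or a shared configuration file:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
require 'aws-sdk'   # version 2 of the gem&lt;br /&gt;
&lt;br /&gt;
# low-level client from aws-sdk-core: list the buckets in the account&lt;br /&gt;
client = Aws::S3::Client.new(region: 'us-east-1')&lt;br /&gt;
client.list_buckets.buckets.each { |b| puts b.name }&lt;br /&gt;
&lt;br /&gt;
# object-oriented interface from aws-sdk-resources: write and read back an object&lt;br /&gt;
s3     = Aws::S3::Resource.new(region: 'us-east-1')&lt;br /&gt;
bucket = s3.bucket('my-new-bucket')&lt;br /&gt;
bucket.object('hello.txt').put(body: 'Hello World!')&lt;br /&gt;
puts bucket.object('hello.txt').get.body.read&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;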
&lt;br /&gt;
==Examples==&lt;br /&gt;
The following section contains examples of ruby interfacing with S3. Sources and documentation for the code are provided. Please observe the copyrights if you choose to use any or all of the posted code.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
'''Note:''' There are 3 key classes in AWS SDK &amp;lt;ref&amp;gt;[http://docs.aws.amazon.com/AmazonS3/latest/dev/UsingTheMPRubyAPI.html Using the Ruby API]&amp;lt;/ref&amp;gt; -&lt;br /&gt;
&lt;br /&gt;
:* '''AWS::S3''' - Denotes an interface to Amazon S3 for the Ruby SDK. It has the ''#buckets'' instance method for creating new buckets or accessing existing buckets.&lt;br /&gt;
:* '''AWS::S3::Bucket''' - Denotes an Amazon S3 Bucket. It provides the ''#objects'' instance method to access existing objects and also other methods to get information about a bucket.&lt;br /&gt;
:* '''AWS::S3::S3Object''' - Denotes an Amazon S3 object. It provides methods for getting information about the object, setting access permissions, and copying, deleting, and uploading objects.&lt;br /&gt;
&lt;br /&gt;
===Creating a connection to S3 server===&lt;br /&gt;
&lt;br /&gt;
Connecting to the S3 server is the essential starting point for accessing your data. The following example shows how to connect to the server via SSL. &amp;lt;ref&amp;gt;Wikipedia: SSL http://en.wikipedia.org/wiki/SSL&amp;lt;/ref&amp;gt;&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
AWS::S3::Base.establish_connection!(&lt;br /&gt;
        :server            =&amp;gt; 'objects.example.com',&lt;br /&gt;
        :use_ssl           =&amp;gt; true,&lt;br /&gt;
        :access_key_id     =&amp;gt; 'my-access-key',&lt;br /&gt;
        :secret_access_key =&amp;gt; 'my-secret-key'&lt;br /&gt;
)&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
:Source: http://ceph.com/docs/master/radosgw/s3/ruby/&lt;br /&gt;
&lt;br /&gt;
===Listing all buckets you own===&lt;br /&gt;
Buckets hold data about the objects you upload. To access any object, you must access the bucket first. This example shows you how to query the S3 server for a list of all the buckets you own.&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
AWS::S3::Service.buckets.each do |bucket|&lt;br /&gt;
        puts &amp;quot;#{bucket.name}\t#{bucket.creation_date}&amp;quot;&lt;br /&gt;
end&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
'''Expected output:'''&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
mybuckat1   2011-04-21T18:05:39.000Z&lt;br /&gt;
mybuckat2   2011-04-21T18:05:48.000Z&lt;br /&gt;
mybuckat3   2011-04-21T18:07:18.000Z&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
:Source: http://ceph.com/docs/master/radosgw/s3/ruby/&lt;br /&gt;
&lt;br /&gt;
===Listing a bucket's contents===&lt;br /&gt;
For a known bucket (see example for listing all buckets you own), you can also query the server for all objects inside. This code shows you how.&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
new_bucket = AWS::S3::Bucket.find('my-new-bucket')&lt;br /&gt;
new_bucket.each do |object|&lt;br /&gt;
        puts &amp;quot;#{object.key}\t#{object.about['content-length']}\t#{object.about['last-modified']}&amp;quot;&lt;br /&gt;
end&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
'''Expected output'''&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
file1.filex 251262  2011-08-08T21:35:48.000Z&lt;br /&gt;
file2.filex 262518  2011-08-08T21:38:01.000Z&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
:Source: http://ceph.com/docs/master/radosgw/s3/ruby/&lt;br /&gt;
&lt;br /&gt;
===Deleting an object===&lt;br /&gt;
This example shows you how to delete the object &amp;quot;goodbye.txt&amp;quot;. You must specify the bucket as the second parameter of the S3Object.delete function.&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
AWS::S3::S3Object.delete('goodbye.txt', 'my-new-bucket')&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
:Source: http://ceph.com/docs/master/radosgw/s3/ruby/&lt;br /&gt;
&lt;br /&gt;
===Deleting a bucket===&lt;br /&gt;
Bucket removal may be necessary when you're trying to reduce the cost of maintaining your data on the S3 servers. This code allows you to delete buckets but only if they're empty.&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
AWS::S3::Bucket.delete('my-new-bucket')&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
:Source: http://ceph.com/docs/master/radosgw/s3/ruby/&lt;br /&gt;
&lt;br /&gt;
===Forced removal of non-empty buckets===&lt;br /&gt;
If you want to forcibly remove a bucket and dump all of its contents, you can force a deletion as shown below.&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
AWS::S3::Bucket.delete('my-new-bucket', :force =&amp;gt; true)&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
:Source: http://ceph.com/docs/master/radosgw/s3/ruby/&lt;br /&gt;
&lt;br /&gt;
===Creating an object===&lt;br /&gt;
Object creation is important to any program. If your app stores data via objects, you can use the S3 server to host them. In this example we create a text file hello.txt with content &amp;quot;Hello World!&amp;quot; and save it to my-new-bucket. Note that the :content_type is required for the ruby method S3Object.store.&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
AWS::S3::S3Object.store(&lt;br /&gt;
        'hello.txt',&lt;br /&gt;
        'Hello World!',&lt;br /&gt;
        'my-new-bucket',&lt;br /&gt;
        :content_type =&amp;gt; 'text/plain'&lt;br /&gt;
)&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
:Source: http://ceph.com/docs/master/radosgw/s3/ruby/&lt;br /&gt;
&lt;br /&gt;
===Change an object's ACL (access control list)===&lt;br /&gt;
Securing user data from view by any web user is important if you're keeping sensitive information about users. This example shows how to open the hello.txt object created earlier in my-new-bucket to the public (anyone can access it). It also shows how to restrict secret_plans.txt so that no one else can access it.&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
policy = AWS::S3::S3Object.acl('hello.txt', 'my-new-bucket')&lt;br /&gt;
policy.grants = [ AWS::S3::ACL::Grant.grant(:public_read) ]&lt;br /&gt;
AWS::S3::S3Object.acl('hello.txt', 'my-new-bucket', policy)&lt;br /&gt;
&lt;br /&gt;
policy = AWS::S3::S3Object.acl('secret_plans.txt', 'my-new-bucket')&lt;br /&gt;
policy.grants = []&lt;br /&gt;
AWS::S3::S3Object.acl('secret_plans.txt', 'my-new-bucket', policy)&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
:Source: http://ceph.com/docs/master/radosgw/s3/ruby/&lt;br /&gt;
&lt;br /&gt;
===Generating object download urls===&lt;br /&gt;
Generating download links for objects on the S3 server can make it easier for you to share your objects with other users. This example shows how to create a download link for the hello.txt object we created earlier.&lt;br /&gt;
Note that generating a link requires you to specify the bucket it resides in. Links can be public, as in the case of hello.txt below, or be set to expire on a timer through the expires_in symbol. In the example, the expiration of secret_plans.txt is 1 hour (60s * 60).&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
puts AWS::S3::S3Object.url_for(&lt;br /&gt;
        'hello.txt',&lt;br /&gt;
        'my-new-bucket',&lt;br /&gt;
        :authenticated =&amp;gt; false&lt;br /&gt;
)&lt;br /&gt;
&lt;br /&gt;
puts AWS::S3::S3Object.url_for(&lt;br /&gt;
        'secret_plans.txt',&lt;br /&gt;
        'my-new-bucket',&lt;br /&gt;
        :expires_in =&amp;gt; 60 * 60&lt;br /&gt;
)&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
'''Expected Output:'''&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
http://objects.dreamhost.com/my-bucket-name/hello.txt&lt;br /&gt;
http://objects.dreamhost.com/my-bucket-name/secret_plans.txt?Signature=XXXXXXXXXXXXXXXXXXXXXXXXXXX&amp;amp;Expires=1316027075&amp;amp;AWSAccessKeyId=XXXXXXXXXXXXXXXXXXX&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
:Source: http://ceph.com/docs/master/radosgw/s3/ruby/&lt;br /&gt;
&lt;br /&gt;
===Download an object from S3 to a local folder===&lt;br /&gt;
An app may retrieve objects stored on the S3 server and save them locally, where they can be accessed later. This example shows how to download the poetry.pdf object from the bucket my-new-bucket to a local file.&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
open('/home/larry/documents/poetry.pdf', 'w') do |file|&lt;br /&gt;
        AWS::S3::S3Object.stream('poetry.pdf', 'my-new-bucket') do |chunk|&lt;br /&gt;
                file.write(chunk)&lt;br /&gt;
        end&lt;br /&gt;
end&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
:Source: http://ceph.com/docs/master/radosgw/s3/ruby/&lt;br /&gt;
&lt;br /&gt;
===Upload a file to Amazon S3===&lt;br /&gt;
This example is a combination of many of the examples shown above. A bucket is created and filled with a local file. The program also generates a URL for the uploaded object and gives the user the option of deleting the object from S3. &lt;br /&gt;
:As per the Apache License v2.0, the following code is reproducible and redistributable under the following license. &amp;lt;ref&amp;gt;[http://aws.amazon.com/apache-2-0/ license]&amp;lt;/ref&amp;gt;&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
# Copyright 2011-2013 Amazon.com, Inc. or its affiliates. All Rights Reserved.&lt;br /&gt;
#&lt;br /&gt;
# Licensed under the Apache License, Version 2.0 (the &amp;quot;License&amp;quot;). You&lt;br /&gt;
# may not use this file except in compliance with the License. A copy of&lt;br /&gt;
# the License is located at&lt;br /&gt;
#&lt;br /&gt;
#     http://aws.amazon.com/apache2.0/&lt;br /&gt;
#&lt;br /&gt;
# or in the &amp;quot;license&amp;quot; file accompanying this file. This file is&lt;br /&gt;
# distributed on an &amp;quot;AS IS&amp;quot; BASIS, WITHOUT WARRANTIES OR CONDITIONS OF&lt;br /&gt;
# ANY KIND, either express or implied. See the License for the specific&lt;br /&gt;
# language governing permissions and limitations under the License.&lt;br /&gt;
&lt;br /&gt;
require 'aws-sdk'&lt;br /&gt;
&lt;br /&gt;
(bucket_name, file_name) = ARGV&lt;br /&gt;
unless bucket_name &amp;amp;&amp;amp; file_name&lt;br /&gt;
  puts &amp;quot;Usage: upload_file.rb &amp;lt;BUCKET_NAME&amp;gt; &amp;lt;FILE_NAME&amp;gt;&amp;quot;&lt;br /&gt;
  exit 1&lt;br /&gt;
end&lt;br /&gt;
&lt;br /&gt;
# get an instance of the S3 interface using the default configuration&lt;br /&gt;
s3 = AWS::S3.new&lt;br /&gt;
&lt;br /&gt;
# create a bucket&lt;br /&gt;
b = s3.buckets.create(bucket_name)&lt;br /&gt;
&lt;br /&gt;
# upload a file&lt;br /&gt;
basename = File.basename(file_name)&lt;br /&gt;
o = b.objects[basename]&lt;br /&gt;
o.write(:file =&amp;gt; file_name)&lt;br /&gt;
&lt;br /&gt;
puts &amp;quot;Uploaded #{file_name} to:&amp;quot;&lt;br /&gt;
puts o.public_url&lt;br /&gt;
&lt;br /&gt;
# generate a presigned URL&lt;br /&gt;
puts &amp;quot;\nUse this URL to download the file:&amp;quot;&lt;br /&gt;
puts o.url_for(:read)&lt;br /&gt;
&lt;br /&gt;
puts &amp;quot;(press any key to delete the object)&amp;quot;&lt;br /&gt;
$stdin.getc&lt;br /&gt;
&lt;br /&gt;
o.delete&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
See the following link for the documentation for AWS SDK - &amp;lt;ref&amp;gt;[http://docs.aws.amazon.com/AWSRubySDK/latest/_index.html AWS SDK for Ruby]&amp;lt;/ref&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==References==&lt;br /&gt;
&amp;lt;references/&amp;gt;&lt;/div&gt;</summary>
		<author><name>Achen4</name></author>
	</entry>
	<entry>
		<id>https://wiki.expertiza.ncsu.edu/index.php?title=CSC/ECE_517_Spring_2015/ch1a_7_SA&amp;diff=93740</id>
		<title>CSC/ECE 517 Spring 2015/ch1a 7 SA</title>
		<link rel="alternate" type="text/html" href="https://wiki.expertiza.ncsu.edu/index.php?title=CSC/ECE_517_Spring_2015/ch1a_7_SA&amp;diff=93740"/>
		<updated>2015-02-14T02:48:42Z</updated>

		<summary type="html">&lt;p&gt;Achen4: /* Alternatives to S3 */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;[[File:S3.gif|frame|Source: http://www.w7cloud.com/7-reasons-to-use-amazon-s3-cloud-computing-online-storage/|right]] Amazon Simple Storage Service (Amazon S3) is a remote, scalable, secure, and cost-efficient storage service provided by Amazon. Users are able to access their storage on Amazon S3 from the web via REST &amp;lt;ref&amp;gt;[http://en.wikipedia.org/wiki/Representational_state_transfer Wikipedia: REST]&amp;lt;/ref&amp;gt;, HTTP &amp;lt;ref&amp;gt;[http://en.wikipedia.org/wiki/Hypertext_Transfer_Protocol HTTP]&amp;lt;/ref&amp;gt;, or SOAP &amp;lt;ref&amp;gt;[http://en.wikipedia.org/wiki/SOAP SOAP]&amp;lt;/ref&amp;gt;, making their data accessible from virtually anywhere in the world. Amazon S3 implements redundancy across multiple devices in multiple facilities in order to safeguard against application failure and data loss and to minimize downtime &amp;lt;ref&amp;gt;[http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/AmazonS3.html Amazon S3]&amp;lt;/ref&amp;gt;. Some of the most prominent users of Amazon S3 include Netflix, SmugMug, WeTransfer, Pinterest, and NASDAQ &amp;lt;ref&amp;gt;[http://aws.amazon.com/s3/ Amazon S3 Homepage]&amp;lt;/ref&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
[https://docs.google.com/a/ncsu.edu/document/d/1TgBtp7flIPKJwkkShgtcIkt--mtHuwVHsQX6Tpzj1rc/edit Writing Assignment 1a]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
==Background==&lt;br /&gt;
&lt;br /&gt;
Amazon S3 launched in March of 2006 in the United States &amp;lt;ref&amp;gt;[http://phx.corporate-ir.net/phoenix.zhtml?c=176060&amp;amp;p=irol-newsArticle&amp;amp;ID=830816 Amazon Press Release]&amp;lt;/ref&amp;gt; and in Europe in November of 2007 &amp;lt;ref&amp;gt;[http://phx.corporate-ir.net/phoenix.zhtml?c=176060&amp;amp;p=irol-newsArticle&amp;amp;ID=1072982 Amazon Press Release]&amp;lt;/ref&amp;gt;. Since its inception, Amazon S3 has reported tremendous growth. Beginning in July of 2006, S3 hosted 800 million objects; April of 2007, 5 billion objects; October of 2007, 10 billion; January 2008, 14 billion &amp;lt;ref&amp;gt;[http://www.allthingsdistributed.com/2008/03/happy_birthday_amazon_s3.html Happy birthday Amazon S3]&amp;lt;/ref&amp;gt;; October 2008, 29 billion &amp;lt;ref&amp;gt;[https://aws.amazon.com/blogs/aws/amazon-s3-now/ amazon s3 now]&amp;lt;/ref&amp;gt;; March 2009, 52 billion &amp;lt;ref&amp;gt;[https://aws.amazon.com/blogs/aws/celebrating-s3s-third-birthday-with-an-upload-promotion/ Upload promotion]&amp;lt;/ref&amp;gt;; August 2009, 64 billion &amp;lt;ref&amp;gt;[http://www.eweek.com/c/a/Cloud-Computing/Amazons-Head-Start-in-the-Cloud-Pays-Off-584083 Amazons Head Start in the Cloud]&amp;lt;/ref&amp;gt;. As of April 2013, S3 hosted more than 2 trillion objects and served an average of 1.1 million requests every second. &amp;lt;ref&amp;gt;[https://aws.amazon.com/blogs/aws/amazon-s3-two-trillion-objects-11-million-requests-second/ Two trillion objects]&amp;lt;/ref&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
===Alternatives to S3===&lt;br /&gt;
There are currently several alternatives to Amazon S3:&lt;br /&gt;
:*[https://cloud.google.com/storage/| Google Cloud Storage]&lt;br /&gt;
:*[http://azure.microsoft.com/en-us/ |MSAzure]&lt;br /&gt;
&lt;br /&gt;
As with any online cloud storage service, the cost of storage is always a concern. The following table shows a cost comparison between them:&lt;br /&gt;
&lt;br /&gt;
[[File:PriceCompare.png  |Source: http://www.cloudberrylab.com/blog/amazon-s3-azure-and-google-cloud-prices-compare/]]&lt;br /&gt;
&lt;br /&gt;
While all three services are fairly comparable in price, there are developer-support considerations when selecting a service. For instance, if an app is focused on the [https://www.android.com/| Android OS], Google Cloud Storage provides more tightly coupled APIs, and the development overhead may be much less than wiring S3 into a Rails back end for a Google-centric app. Similarly, [http://www.windowsphone.com/en-us/features| Windows Phone OS] apps would find more native support on Azure, and likewise Kindle Fire apps would have more native support on [https://developer.amazon.com/public/apis| Fire OS].&lt;br /&gt;
&lt;br /&gt;
The reliability of a cloud storage service like S3 plays a heavy role in the selection process. Less downtime translates to a more robust and reliable app. In 2014, Amazon S3 registered only 23 outages totaling 2.69 hours of downtime whereas Google registered 8 outages totaling 14.23 hours of downtime. Microsoft Azure totaled 141 outages summing 10.97 hours. &amp;lt;ref&amp;gt;[https://gigaom.com/2015/01/07/amazon-web-services-tops-list-of-most-reliable-public-clouds/  Barb Darrow. Amazon Web Services tops list of most reliable public clouds ]&amp;lt;/ref&amp;gt;&lt;br /&gt;
&lt;br /&gt;
[[File:ServiceCompare.png |Source: https://gigaom.com/2015/01/07/amazon-web-services-tops-list-of-most-reliable-public-clouds/]]&lt;br /&gt;
&lt;br /&gt;
===Design===&lt;br /&gt;
&lt;br /&gt;
S3 is an example of object storage and is unlike a traditional hierarchical file system. S3 exposes a simple feature set to improve robustness, and all data in S3 is accessed in terms of objects and buckets. &lt;br /&gt;
&lt;br /&gt;
====Objects====&lt;br /&gt;
&lt;br /&gt;
Objects are the basic units of storage in Amazon S3. Each object is composed of object data and metadata, and S3 supports sizes of up to 5 terabytes per object. Metadata is a set of name-value pairs that describe the object, such as the date it was last modified; custom data about the object can also be stored in the metadata by the user. Every object is identified by a user-defined key and can be versioned. &amp;lt;ref&amp;gt;[http://docs.aws.amazon.com/AmazonS3/latest/dev/Introduction.html S3 Introduction]&amp;lt;/ref&amp;gt; An object consists of the following: Key, Version ID, Value, Metadata, Subresources and Access Control Information. &amp;lt;ref&amp;gt;[http://docs.aws.amazon.com/AmazonS3/latest/dev/UsingObjects.html Using Objects]&amp;lt;/ref&amp;gt;&lt;br /&gt;
&lt;br /&gt;
====Buckets====&lt;br /&gt;
&lt;br /&gt;
A bucket is a container for objects and every object must be part of a bucket. Any number of objects can be part of a Bucket. Buckets can be configured to be hosted in a particular region (US, EU, Asia Pacific etc.) in order to optimize latency. S3 limits the number of buckets per account to 100. &amp;lt;ref&amp;gt;[http://docs.aws.amazon.com/AmazonS3/latest/dev/UsingBucket.html Using Bucket]&amp;lt;/ref&amp;gt;&lt;br /&gt;
&lt;br /&gt;
====Keys and Metadata====&lt;br /&gt;
&lt;br /&gt;
A user specifies a key on object creation, which is used to uniquely identify the object within the bucket. Object keys can be at most 1024 bytes long.&lt;br /&gt;
&lt;br /&gt;
There are two kinds of metadata for an object: system metadata and object metadata. System metadata is used by S3 for object management; for example, Date and Content-Type are stored as system metadata. Object metadata is optional and can be used by the user to attach additional metadata to an object during object creation. &amp;lt;ref&amp;gt;[http://docs.aws.amazon.com/AmazonS3/latest/dev/UsingMetadata.html Using Metadata]&amp;lt;/ref&amp;gt;&lt;br /&gt;
&lt;br /&gt;
====Regions====&lt;br /&gt;
&lt;br /&gt;
Regions allow a user to specify the geographical region where buckets will be stored. This can be used to optimize latency and minimize costs.&lt;br /&gt;
S3 supports the following regions - US Standard, US West (Oregon) region, US West (N. California) region, EU (Ireland) region, EU (Frankfurt) region, Asia Pacific (Singapore) region, Asia Pacific (Tokyo) region, Asia Pacific (Sydney) region, South America (Sao Paulo) region &amp;lt;ref&amp;gt;[http://docs.aws.amazon.com/AmazonS3/latest/dev/Introduction.html#Regions Regions]&amp;lt;/ref&amp;gt;&lt;br /&gt;
&lt;br /&gt;
====Versioning====&lt;br /&gt;
&lt;br /&gt;
Versioning can be used to retrieve and restore every version of an object in a bucket. Once versioning is enabled, every change to an object (create, modify, delete) results in a separate version of the object, which can later be used for restoration or recovery. Versioning is configured at the bucket level, not for individual objects. It can be turned on per bucket, but a version-enabled bucket cannot be returned to an unversioned state; versioning can only be suspended. &amp;lt;ref&amp;gt;[http://docs.aws.amazon.com/AmazonS3/latest/dev/Versioning.html Versioning]&amp;lt;/ref&amp;gt;&lt;br /&gt;
&lt;br /&gt;
====Access Permissions====&lt;br /&gt;
&lt;br /&gt;
All resources (buckets, objects, etc.) are private in Amazon S3 by default. Only the resource owner can access a resource, and the owner can grant other users access to it. There are two types of access policies in S3: resource-based policies and user policies. Resource-based policies are attached to a particular resource, and user policies are assigned to a particular user.&amp;lt;ref&amp;gt;[http://docs.aws.amazon.com/AmazonS3/latest/dev/s3-access-control.html Access Control]&amp;lt;/ref&amp;gt;&lt;br /&gt;
&lt;br /&gt;
====Data Protection====&lt;br /&gt;
&lt;br /&gt;
Objects are redundantly stored on multiple devices across multiple facilities within a region for durability. To improve durability, write requests do not return success before storing the data across multiple facilities. Also checksums are used to verify data integrity. If any corruption is detected, it is repaired using redundant data.&amp;lt;ref&amp;gt;[http://docs.aws.amazon.com/AmazonS3/latest/dev/DataDurability.html Data Durability]&amp;lt;/ref&amp;gt;&lt;br /&gt;
&lt;br /&gt;
===Ruby and Amazon S3===&lt;br /&gt;
&lt;br /&gt;
Amazon Web Services (AWS) provides an SDK &amp;lt;ref&amp;gt;[http://rubygems.org/gems/aws-sdk| download]&amp;lt;/ref&amp;gt; that works with Ruby for many amazon webservices, including Amazon S3. Developers new to the Amazon AWS SDK should begin with version 2 as it includes many built in features such as waiters, automatically paginated responses, and a streamlined plugin style architecture. Version 2 of the SDK has 2 &amp;quot;packages&amp;quot;, also referred to as &amp;quot;gems&amp;quot; &amp;lt;ref&amp;gt;[http://en.wikipedia.org/wiki/RubyGems Wikipedia: RubyGems]&amp;lt;/ref&amp;gt;:&lt;br /&gt;
&lt;br /&gt;
:* '''aws-sdk-core''' - provides a direct mapping to the AWS APIs including automatic response paging, waiters, parameter validation, and Ruby type support&lt;br /&gt;
:* '''aws-sdk-resources'''  - provides an object-oriented abstraction over low-level interfaces in the core to reduce the complexity of utilizing core interfaces; resource objects reference other objects such as an Amazon S3 instance and the attributes and actions as instance variables and methods.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
It should also be noted that there exists a version 1 of the AWS SDK that lacks some of the &amp;quot;convenience features&amp;quot; available in version 2 of the SDK. For more information, see the AWS Ruby Development Blog. &amp;lt;ref&amp;gt;[http://ruby.awsblog.com/post/Tx2OMCYFEZX2I6A/AWS-SDK-for-Ruby-V2-Preview-Release|AWS Ruby Development Blog]&amp;lt;/ref&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==Examples==&lt;br /&gt;
The following section contains examples of ruby interfacing with S3. Sources and documentation for the code are provided. Please observe the copyrights if you choose to use any or all of the posted code.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
'''Note:''' There are 3 key classes in AWS SDK &amp;lt;ref&amp;gt;[http://docs.aws.amazon.com/AmazonS3/latest/dev/UsingTheMPRubyAPI.html Using the Ruby API]&amp;lt;/ref&amp;gt; -&lt;br /&gt;
&lt;br /&gt;
:* '''AWS::S3''' - Denotes an interface to Amazon S3 for the Ruby SDK. It has the ''#buckets'' instance method for creating new buckets or accessing existing buckets.&lt;br /&gt;
:* '''AWS::S3::Bucket''' - Denotes an Amazon S3 Bucket. It provides the ''#objects'' instance method to access existing objects and also other methods to get information about a bucket.&lt;br /&gt;
:* '''AWS::S3::S3Object''' - Denotes an Amazon S3 Object. It provides the method that gives information about the object and also setting access permissions, copying, deleting and uploading objects.&lt;br /&gt;
&lt;br /&gt;
===Creating a connection to S3 server===&lt;br /&gt;
&lt;br /&gt;
Connecting to the S3 server is the essential starting point for accessing your data. The following example shows how to connect to the server via SSL. &amp;lt;ref&amp;gt;Wikipedia: SSL http://en.wikipedia.org/wiki/SSL&amp;lt;/ref&amp;gt;&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
AWS::S3::Base.establish_connection!(&lt;br /&gt;
        :server            =&amp;gt; 'objects.example.com',&lt;br /&gt;
        :use_ssl           =&amp;gt; true,&lt;br /&gt;
        :access_key_id     =&amp;gt; 'my-access-key',&lt;br /&gt;
        :secret_access_key =&amp;gt; 'my-secret-key'&lt;br /&gt;
)&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
:Source: http://ceph.com/docs/master/radosgw/s3/ruby/&lt;br /&gt;
&lt;br /&gt;
===Listing all buckets you own===&lt;br /&gt;
Buckets hold data about the objects you upload. To access any object, you must access the bucket first. This example shows you how to query the S3 server for a list of all the buckets you own.&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
AWS::S3::Service.buckets.each do |bucket|&lt;br /&gt;
        puts &amp;quot;#{bucket.name}\t#{bucket.creation_date}&amp;quot;&lt;br /&gt;
end&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
'''Expected output:'''&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
mybuckat1   2011-04-21T18:05:39.000Z&lt;br /&gt;
mybuckat2   2011-04-21T18:05:48.000Z&lt;br /&gt;
mybuckat3   2011-04-21T18:07:18.000Z&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
:Source: http://ceph.com/docs/master/radosgw/s3/ruby/&lt;br /&gt;
&lt;br /&gt;
===Listing a bucket's contents===&lt;br /&gt;
For a known bucket (see example for listing all buckets you own), you can also query the server for all objects inside. This code shows you how.&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
new_bucket = AWS::S3::Bucket.find('my-new-bucket')&lt;br /&gt;
new_bucket.each do |object|&lt;br /&gt;
        puts &amp;quot;#{object.key}\t#{object.about['content-length']}\t#{object.about['last-modified']}&amp;quot;&lt;br /&gt;
end&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
'''Expected output'''&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
file1.filex 251262  2011-08-08T21:35:48.000Z&lt;br /&gt;
file2.filex 262518  2011-08-08T21:38:01.000Z&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
:Source: http://ceph.com/docs/master/radosgw/s3/ruby/&lt;br /&gt;
&lt;br /&gt;
===Deleting an object===&lt;br /&gt;
This example shows you how to delete the object &amp;quot;goodbye.txt&amp;quot;. You must specify the bucket as the second parameter of the S3Object.delete function.&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
AWS::S3::S3Object.delete('goodbye.txt', 'my-new-bucket')&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
:Source: http://ceph.com/docs/master/radosgw/s3/ruby/&lt;br /&gt;
&lt;br /&gt;
===Deleting a bucket===&lt;br /&gt;
Bucket removal may be necessary when you're trying to reduce the cost of maintaining your data on the S3 servers. This code allows you to delete buckets but only if they're empty.&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
AWS::S3::Bucket.delete('my-new-bucket')&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
:Source: http://ceph.com/docs/master/radosgw/s3/ruby/&lt;br /&gt;
&lt;br /&gt;
===Forced removal of non-empty buckets===&lt;br /&gt;
If you want to forcibly remove a bucket and dump all of its contents, you can force a deletion as shown below.&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
AWS::S3::Bucket.delete('my-new-bucket', :force =&amp;gt; true)&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
:Source: http://ceph.com/docs/master/radosgw/s3/ruby/&lt;br /&gt;
&lt;br /&gt;
===Creating an object===&lt;br /&gt;
Object creation is important to any program. If your app stores data via objects, you can use the S3 server to host them. In this example we create a text file hello.txt with content &amp;quot;Hello World!&amp;quot; and save it to my-new-bucket. Note that the :content_type is required for the ruby method S3Object.store.&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
AWS::S3::S3Object.store(&lt;br /&gt;
        'hello.txt',&lt;br /&gt;
        'Hello World!',&lt;br /&gt;
        'my-new-bucket',&lt;br /&gt;
        :content_type =&amp;gt; 'text/plain'&lt;br /&gt;
)&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
:Source: http://ceph.com/docs/master/radosgw/s3/ruby/&lt;br /&gt;
&lt;br /&gt;
===Change an object's ACL (access control list)===&lt;br /&gt;
Securing user data from view by any web user is important if you're keeping sensitive information about users. This example shows you how to open the hello.txt object created earlier in my-new-bucket to the public (anyone can access it). This example also shows how to restrict the secret_plans.txt so that no one can access them.&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
policy = AWS::S3::S3Object.acl('hello.txt', 'my-new-bucket')&lt;br /&gt;
policy.grants = [ AWS::S3::ACL::Grant.grant(:public_read) ]&lt;br /&gt;
AWS::S3::S3Object.acl('hello.txt', 'my-new-bucket', policy)&lt;br /&gt;
&lt;br /&gt;
policy = AWS::S3::S3Object.acl('secret_plans.txt', 'my-new-bucket')&lt;br /&gt;
policy.grants = []&lt;br /&gt;
AWS::S3::S3Object.acl('secret_plans.txt', 'my-new-bucket', policy)&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
:Source: http://ceph.com/docs/master/radosgw/s3/ruby/&lt;br /&gt;
&lt;br /&gt;
===Generating object download urls===&lt;br /&gt;
Generating download links for objects on the S3 server can make it easier for you to share your objects with other users. This example shows how to create a download link for the hello.txt object we created earlier.&lt;br /&gt;
Note that generating a link requires you to specify the bucket it resides in. Links can be public, as in the case of hello.txt below, or be set to expire on a timer through the expires_in symbol. In the example, the expiration of secret_plans.txt is 1 hour (60s * 60).&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
puts AWS::S3::S3Object.url_for(&lt;br /&gt;
        'hello.txt',&lt;br /&gt;
        'my-new-bucket',&lt;br /&gt;
        :authenticated =&amp;gt; false&lt;br /&gt;
)&lt;br /&gt;
&lt;br /&gt;
puts AWS::S3::S3Object.url_for(&lt;br /&gt;
        'secret_plans.txt',&lt;br /&gt;
        'my-new-bucket',&lt;br /&gt;
        :expires_in =&amp;gt; 60 * 60&lt;br /&gt;
)&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
'''Expected Output:'''&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
http://objects.dreamhost.com/my-bucket-name/hello.txt&lt;br /&gt;
http://objects.dreamhost.com/my-bucket-name/secret_plans.txt?Signature=XXXXXXXXXXXXXXXXXXXXXXXXXXX&amp;amp;Expires=1316027075&amp;amp;AWSAccessKeyId=XXXXXXXXXXXXXXXXXXX&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
:Source: http://ceph.com/docs/master/radosgw/s3/ruby/&lt;br /&gt;
&lt;br /&gt;
===Download an object from S3 to a local folder===&lt;br /&gt;
An app may retrieve objects stored on the S3 server and save them locally, where they can be accessed later. This example shows how to download the poetry.pdf object from the bucket my-new-bucket to a local file.&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
open('/home/larry/documents/poetry.pdf', 'w') do |file|&lt;br /&gt;
        AWS::S3::S3Object.stream('poetry.pdf', 'my-new-bucket') do |chunk|&lt;br /&gt;
                file.write(chunk)&lt;br /&gt;
        end&lt;br /&gt;
end&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
:Source: http://ceph.com/docs/master/radosgw/s3/ruby/&lt;br /&gt;
&lt;br /&gt;
===Upload a file to Amazon S3===&lt;br /&gt;
This example is a combination of many of the examples shown above. A bucket is created and filled with a local file. The program also generates a URL for the uploaded object and gives the user the option of deleting the object from S3. &lt;br /&gt;
:As per the Apache License v2.0, the following code is reproducible and redistributable under the following license. &amp;lt;ref&amp;gt;[http://aws.amazon.com/apache-2-0/ license]&amp;lt;/ref&amp;gt;&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
# Copyright 2011-2013 Amazon.com, Inc. or its affiliates. All Rights Reserved.&lt;br /&gt;
#&lt;br /&gt;
# Licensed under the Apache License, Version 2.0 (the &amp;quot;License&amp;quot;). You&lt;br /&gt;
# may not use this file except in compliance with the License. A copy of&lt;br /&gt;
# the License is located at&lt;br /&gt;
#&lt;br /&gt;
#     http://aws.amazon.com/apache2.0/&lt;br /&gt;
#&lt;br /&gt;
# or in the &amp;quot;license&amp;quot; file accompanying this file. This file is&lt;br /&gt;
# distributed on an &amp;quot;AS IS&amp;quot; BASIS, WITHOUT WARRANTIES OR CONDITIONS OF&lt;br /&gt;
# ANY KIND, either express or implied. See the License for the specific&lt;br /&gt;
# language governing permissions and limitations under the License.&lt;br /&gt;
&lt;br /&gt;
require 'aws-sdk'&lt;br /&gt;
&lt;br /&gt;
(bucket_name, file_name) = ARGV&lt;br /&gt;
unless bucket_name &amp;amp;&amp;amp; file_name&lt;br /&gt;
  puts &amp;quot;Usage: upload_file.rb &amp;lt;BUCKET_NAME&amp;gt; &amp;lt;FILE_NAME&amp;gt;&amp;quot;&lt;br /&gt;
  exit 1&lt;br /&gt;
end&lt;br /&gt;
&lt;br /&gt;
# get an instance of the S3 interface using the default configuration&lt;br /&gt;
s3 = AWS::S3.new&lt;br /&gt;
&lt;br /&gt;
# create a bucket&lt;br /&gt;
b = s3.buckets.create(bucket_name)&lt;br /&gt;
&lt;br /&gt;
# upload a file&lt;br /&gt;
basename = File.basename(file_name)&lt;br /&gt;
o = b.objects[basename]&lt;br /&gt;
o.write(:file =&amp;gt; file_name)&lt;br /&gt;
&lt;br /&gt;
puts &amp;quot;Uploaded #{file_name} to:&amp;quot;&lt;br /&gt;
puts o.public_url&lt;br /&gt;
&lt;br /&gt;
# generate a presigned URL&lt;br /&gt;
puts &amp;quot;\nUse this URL to download the file:&amp;quot;&lt;br /&gt;
puts o.url_for(:read)&lt;br /&gt;
&lt;br /&gt;
puts &amp;quot;(press any key to delete the object)&amp;quot;&lt;br /&gt;
$stdin.getc&lt;br /&gt;
&lt;br /&gt;
o.delete&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
See the following link for the documentation for AWS SDK - &amp;lt;ref&amp;gt;[http://docs.aws.amazon.com/AWSRubySDK/latest/_index.html AWS SDK for Ruby]&amp;lt;/ref&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==References==&lt;br /&gt;
&amp;lt;references/&amp;gt;&lt;/div&gt;</summary>
		<author><name>Achen4</name></author>
	</entry>
	<entry>
		<id>https://wiki.expertiza.ncsu.edu/index.php?title=CSC/ECE_517_Spring_2015/ch1a_7_SA&amp;diff=93739</id>
		<title>CSC/ECE 517 Spring 2015/ch1a 7 SA</title>
		<link rel="alternate" type="text/html" href="https://wiki.expertiza.ncsu.edu/index.php?title=CSC/ECE_517_Spring_2015/ch1a_7_SA&amp;diff=93739"/>
		<updated>2015-02-14T02:45:03Z</updated>

		<summary type="html">&lt;p&gt;Achen4: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;[[File:S3.gif|frame|Source: http://www.w7cloud.com/7-reasons-to-use-amazon-s3-cloud-computing-online-storage/|right]] Amazon Simple Storage Service (Amazon S3) is a remote, scalable, secure, and cost-efficient storage service provided by Amazon. Users are able to access their storage on Amazon S3 from the web via REST &amp;lt;ref&amp;gt;[http://en.wikipedia.org/wiki/Representational_state_transfer Wikipedia: REST]&amp;lt;/ref&amp;gt;, HTTP &amp;lt;ref&amp;gt;[http://en.wikipedia.org/wiki/Hypertext_Transfer_Protocol HTTP]&amp;lt;/ref&amp;gt;, or SOAP &amp;lt;ref&amp;gt;[http://en.wikipedia.org/wiki/SOAP SOAP]&amp;lt;/ref&amp;gt;, making their data accessible from virtually anywhere in the world. Amazon S3 implements redundancy across multiple devices in multiple facilities in order to safeguard against application failure and data loss and to minimize downtime &amp;lt;ref&amp;gt;[http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/AmazonS3.html Amazon S3]&amp;lt;/ref&amp;gt;. Some of the most prominent users of Amazon S3 include Netflix, SmugMug, WeTransfer, Pinterest, and NASDAQ &amp;lt;ref&amp;gt;[http://aws.amazon.com/s3/ Amazon S3 Homepage]&amp;lt;/ref&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
[https://docs.google.com/a/ncsu.edu/document/d/1TgBtp7flIPKJwkkShgtcIkt--mtHuwVHsQX6Tpzj1rc/edit Writing Assignment 1a]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
==Background==&lt;br /&gt;
&lt;br /&gt;
Amazon S3 launched in March of 2006 in the United States &amp;lt;ref&amp;gt;[http://phx.corporate-ir.net/phoenix.zhtml?c=176060&amp;amp;p=irol-newsArticle&amp;amp;ID=830816 Amazon Press Release]&amp;lt;/ref&amp;gt; and in Europe in November of 2007 &amp;lt;ref&amp;gt;[http://phx.corporate-ir.net/phoenix.zhtml?c=176060&amp;amp;p=irol-newsArticle&amp;amp;ID=1072982 Amazon Press Release]&amp;lt;/ref&amp;gt;. Since its inception, Amazon S3 has reported tremendous growth. Beginning in July of 2006, S3 hosted 800 million objects; April of 2007, 5 billion objects; October of 2007, 10 billion; January 2008, 14 billion &amp;lt;ref&amp;gt;[http://www.allthingsdistributed.com/2008/03/happy_birthday_amazon_s3.html Happy birthday Amazon S3]&amp;lt;/ref&amp;gt;; October 2008, 29 billion &amp;lt;ref&amp;gt;[https://aws.amazon.com/blogs/aws/amazon-s3-now/ amazon s3 now]&amp;lt;/ref&amp;gt;; March 2009, 52 billion &amp;lt;ref&amp;gt;[https://aws.amazon.com/blogs/aws/celebrating-s3s-third-birthday-with-an-upload-promotion/ Upload promotion]&amp;lt;/ref&amp;gt;; August 2009, 64 billion &amp;lt;ref&amp;gt;[http://www.eweek.com/c/a/Cloud-Computing/Amazons-Head-Start-in-the-Cloud-Pays-Off-584083 Amazons Head Start in the Cloud]&amp;lt;/ref&amp;gt;. As of April 2013, S3 hosted more than 2 trillion objects and served an average of 1.1 million requests every second. &amp;lt;ref&amp;gt;[https://aws.amazon.com/blogs/aws/amazon-s3-two-trillion-objects-11-million-requests-second/ Two trillion objects]&amp;lt;/ref&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
===Alternatives to S3===&lt;br /&gt;
There are currently several alternatives to Amazon S3:&lt;br /&gt;
:*[https://cloud.google.com/storage/| Google Cloud Storage]&lt;br /&gt;
:*[http://azure.microsoft.com/en-us/ |MSAzure]&lt;br /&gt;
&lt;br /&gt;
As with any online cloud storage service, the cost of storage is always a concern. The following table shows a cost comparison between them:&lt;br /&gt;
&lt;br /&gt;
[[File:PriceCompare.png]]&lt;br /&gt;
&lt;br /&gt;
While all three services are fairly comparable in price, there are developer-support considerations when selecting a service. For instance, if an app is focused on the [https://www.android.com/| Android OS], Google Cloud Storage provides more tightly coupled APIs, and the development overhead may be much less than wiring S3 into a Rails back end for a Google-centric app. Similarly, [http://www.windowsphone.com/en-us/features| Windows Phone OS] apps would find more native support on Azure, and likewise Kindle Fire apps would have more native support on [https://developer.amazon.com/public/apis| Fire OS].&lt;br /&gt;
&lt;br /&gt;
The reliability of a cloud storage service like S3 plays a heavy role in the selection process. Less downtime translates to a more robust and reliable app. In 2014, Amazon S3 registered only 23 outages totaling 2.69 hours of downtime whereas Google registered 8 outages totaling 14.23 hours of downtime. Microsoft Azure totaled 141 outages summing 10.97 hours. &amp;lt;ref&amp;gt;[https://gigaom.com/2015/01/07/amazon-web-services-tops-list-of-most-reliable-public-clouds/  Barb Darrow. Amazon Web Services tops list of most reliable public clouds ]&amp;lt;/ref&amp;gt;&lt;br /&gt;
&lt;br /&gt;
[[File:ServiceCompare.png]]&lt;br /&gt;
&lt;br /&gt;
===Design===&lt;br /&gt;
&lt;br /&gt;
S3 is an example of object storage and is unlike a traditional hierarchical file system. S3 exposes a simple feature set to improve robustness, and all data in S3 is accessed in terms of objects and buckets. &lt;br /&gt;
&lt;br /&gt;
====Objects====&lt;br /&gt;
&lt;br /&gt;
Objects are the basic units of storage in Amazon S3. Each object is composed of object data and metadata, and S3 supports sizes of up to 5 terabytes per object. Metadata is a set of name-value pairs that describe the object, such as the date it was last modified; custom data about the object can also be stored in the metadata by the user. Every object is identified by a user-defined key and can be versioned. &amp;lt;ref&amp;gt;[http://docs.aws.amazon.com/AmazonS3/latest/dev/Introduction.html S3 Introduction]&amp;lt;/ref&amp;gt; An object consists of the following: Key, Version ID, Value, Metadata, Subresources and Access Control Information. &amp;lt;ref&amp;gt;[http://docs.aws.amazon.com/AmazonS3/latest/dev/UsingObjects.html Using Objects]&amp;lt;/ref&amp;gt;&lt;br /&gt;
&lt;br /&gt;
====Buckets====&lt;br /&gt;
&lt;br /&gt;
A bucket is a container for objects and every object must be part of a bucket. Any number of objects can be part of a Bucket. Buckets can be configured to be hosted in a particular region (US, EU, Asia Pacific etc.) in order to optimize latency. S3 limits the number of buckets per account to 100. &amp;lt;ref&amp;gt;[http://docs.aws.amazon.com/AmazonS3/latest/dev/UsingBucket.html Using Bucket]&amp;lt;/ref&amp;gt;&lt;br /&gt;
&lt;br /&gt;
====Keys and Metadata====&lt;br /&gt;
&lt;br /&gt;
A user specifies a key on object creation, which is used to uniquely identify the object within the bucket. Object keys can be at most 1024 bytes long.&lt;br /&gt;
&lt;br /&gt;
There are two kinds of metadata for an object: system metadata and object metadata. System metadata is used by S3 for object management; for example, Date and Content-Type are stored as system metadata. Object metadata is optional and can be used by the user to attach additional metadata to an object during object creation. &amp;lt;ref&amp;gt;[http://docs.aws.amazon.com/AmazonS3/latest/dev/UsingMetadata.html Using Metadata]&amp;lt;/ref&amp;gt;&lt;br /&gt;
&lt;br /&gt;
====Regions====&lt;br /&gt;
&lt;br /&gt;
Regions allow a user to specify the geographical region where buckets will be stored. This can be used to optimize latency and minimize costs.&lt;br /&gt;
S3 supports the following regions - US Standard, US West (Oregon) region, US West (N. California) region, EU (Ireland) region, EU (Frankfurt) region, Asia Pacific (Singapore) region, Asia Pacific (Tokyo) region, Asia Pacific (Sydney) region, South America (Sao Paulo) region &amp;lt;ref&amp;gt;[http://docs.aws.amazon.com/AmazonS3/latest/dev/Introduction.html#Regions Regions]&amp;lt;/ref&amp;gt;&lt;br /&gt;
&lt;br /&gt;
====Versioning====&lt;br /&gt;
&lt;br /&gt;
Versioning can be used to retrieve and restore every version of an object in a bucket. Once versioning is enabled, every change to an object (create, modify, delete) results in a separate version of the object, which can later be used for restoration or recovery. Versioning is configured at the bucket level, not for individual objects. It can be turned on per bucket, but a version-enabled bucket cannot be returned to an unversioned state; versioning can only be suspended. &amp;lt;ref&amp;gt;[http://docs.aws.amazon.com/AmazonS3/latest/dev/Versioning.html Versioning]&amp;lt;/ref&amp;gt;&lt;br /&gt;
&lt;br /&gt;
====Access Permissions====&lt;br /&gt;
&lt;br /&gt;
All resources (buckets, objects, etc.) are private in Amazon S3 by default. Only the resource owner can access a resource, and the owner can grant other users access to it. There are two types of access policies in S3: resource-based policies and user policies. Resource-based policies are attached to a particular resource, and user policies are assigned to a particular user.&amp;lt;ref&amp;gt;[http://docs.aws.amazon.com/AmazonS3/latest/dev/s3-access-control.html Access Control]&amp;lt;/ref&amp;gt;&lt;br /&gt;
&lt;br /&gt;
====Data Protection====&lt;br /&gt;
&lt;br /&gt;
Objects are redundantly stored on multiple devices across multiple facilities within a region for durability. To improve durability, write requests do not return success before storing the data across multiple facilities. Also checksums are used to verify data integrity. If any corruption is detected, it is repaired using redundant data.&amp;lt;ref&amp;gt;[http://docs.aws.amazon.com/AmazonS3/latest/dev/DataDurability.html Data Durability]&amp;lt;/ref&amp;gt;&lt;br /&gt;
&lt;br /&gt;
===Ruby and Amazon S3===&lt;br /&gt;
&lt;br /&gt;
Amazon Web Services (AWS) provides an SDK &amp;lt;ref&amp;gt;[http://rubygems.org/gems/aws-sdk| download]&amp;lt;/ref&amp;gt; that works with Ruby for many amazon webservices, including Amazon S3. Developers new to the Amazon AWS SDK should begin with version 2 as it includes many built in features such as waiters, automatically paginated responses, and a streamlined plugin style architecture. Version 2 of the SDK has 2 &amp;quot;packages&amp;quot;, also referred to as &amp;quot;gems&amp;quot; &amp;lt;ref&amp;gt;[http://en.wikipedia.org/wiki/RubyGems Wikipedia: RubyGems]&amp;lt;/ref&amp;gt;:&lt;br /&gt;
&lt;br /&gt;
:* '''aws-sdk-core''' - provides a direct mapping to the AWS APIs including automatic response paging, waiters, parameter validation, and Ruby type support&lt;br /&gt;
:* '''aws-sdk-resources'''  - provides an object-oriented abstraction over low-level interfaces in the core to reduce the complexity of utilizing core interfaces; resource objects reference other objects such as an Amazon S3 instance and the attributes and actions as instance variables and methods.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
It should also be noted that there exists a version 1 of the AWS SDK that lacks some of the &amp;quot;convenience features&amp;quot; available in version 2 of the SDK. For more information, see the AWS Ruby Development Blog. &amp;lt;ref&amp;gt;[http://ruby.awsblog.com/post/Tx2OMCYFEZX2I6A/AWS-SDK-for-Ruby-V2-Preview-Release|AWS Ruby Development Blog]&amp;lt;/ref&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==Examples==&lt;br /&gt;
The following section contains examples of ruby interfacing with S3. Sources and documentation for the code are provided. Please observe the copyrights if you choose to use any or all of the posted code.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
'''Note:''' There are 3 key classes in AWS SDK &amp;lt;ref&amp;gt;[http://docs.aws.amazon.com/AmazonS3/latest/dev/UsingTheMPRubyAPI.html Using the Ruby API]&amp;lt;/ref&amp;gt; -&lt;br /&gt;
&lt;br /&gt;
:* '''AWS::S3''' - Denotes an interface to Amazon S3 for the Ruby SDK. It has the ''#buckets'' instance method for creating new buckets or accessing existing buckets.&lt;br /&gt;
:* '''AWS::S3::Bucket''' - Denotes an Amazon S3 Bucket. It provides the ''#objects'' instance method to access existing objects and also other methods to get information about a bucket.&lt;br /&gt;
:* '''AWS::S3::S3Object''' - Denotes an Amazon S3 Object. It provides the method that gives information about the object and also setting access permissions, copying, deleting and uploading objects.&lt;br /&gt;
&lt;br /&gt;
===Creating a connection to S3 server===&lt;br /&gt;
&lt;br /&gt;
Connecting to the S3 server is the essential starting point for accessing your data. The following example shows how to connect to the server via SSL. &amp;lt;ref&amp;gt;Wikipedia: SSL http://en.wikipedia.org/wiki/SSL&amp;lt;/ref&amp;gt;&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
AWS::S3::Base.establish_connection!(&lt;br /&gt;
        :server            =&amp;gt; 'objects.example.com',&lt;br /&gt;
        :use_ssl           =&amp;gt; true,&lt;br /&gt;
        :access_key_id     =&amp;gt; 'my-access-key',&lt;br /&gt;
        :secret_access_key =&amp;gt; 'my-secret-key'&lt;br /&gt;
)&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
:Source: http://ceph.com/docs/master/radosgw/s3/ruby/&lt;br /&gt;
&lt;br /&gt;
===Listing all buckets you own===&lt;br /&gt;
Buckets hold data about the objects you upload. To access any object, you must access the bucket first. This example shows you how to query the S3 server for a list of all the buckets you own.&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
AWS::S3::Service.buckets.each do |bucket|&lt;br /&gt;
        puts &amp;quot;#{bucket.name}\t#{bucket.creation_date}&amp;quot;&lt;br /&gt;
end&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
'''Expected output:'''&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
mybuckat1   2011-04-21T18:05:39.000Z&lt;br /&gt;
mybuckat2   2011-04-21T18:05:48.000Z&lt;br /&gt;
mybuckat3   2011-04-21T18:07:18.000Z&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
:Source: http://ceph.com/docs/master/radosgw/s3/ruby/&lt;br /&gt;
&lt;br /&gt;
===Listing a bucket's contents===&lt;br /&gt;
For a known bucket (see example for listing all buckets you own), you can also query the server for all objects inside. This code shows you how.&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
new_bucket = AWS::S3::Bucket.find('my-new-bucket')&lt;br /&gt;
new_bucket.each do |object|&lt;br /&gt;
        puts &amp;quot;#{object.key}\t#{object.about['content-length']}\t#{object.about['last-modified']}&amp;quot;&lt;br /&gt;
end&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
'''Expected output'''&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
file1.filex 251262  2011-08-08T21:35:48.000Z&lt;br /&gt;
file2.filex 262518  2011-08-08T21:38:01.000Z&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
:Source: http://ceph.com/docs/master/radosgw/s3/ruby/&lt;br /&gt;
&lt;br /&gt;
===Deleting an object===&lt;br /&gt;
This example shows you how to delete the object &amp;quot;goodbye.txt&amp;quot;. You must specify the bucket as the second parameter of the S3Object.delete function.&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
AWS::S3::S3Object.delete('goodbye.txt', 'my-new-bucket')&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
:Source: http://ceph.com/docs/master/radosgw/s3/ruby/&lt;br /&gt;
&lt;br /&gt;
===Deleting a bucket===&lt;br /&gt;
Bucket removal may be necessary when you're trying to reduce the cost of maintaining your data on the S3 servers. This code allows you to delete buckets but only if they're empty.&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
AWS::S3::Bucket.delete('my-new-bucket')&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
:Source: http://ceph.com/docs/master/radosgw/s3/ruby/&lt;br /&gt;
&lt;br /&gt;
===Forced removal of non-empty buckets===&lt;br /&gt;
If you want to forcibly remove a bucket and dump all of its contents, you can force a deletion as shown below.&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
AWS::S3::Bucket.delete('my-new-bucket', :force =&amp;gt; true)&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
:Source: http://ceph.com/docs/master/radosgw/s3/ruby/&lt;br /&gt;
&lt;br /&gt;
===Creating an object===&lt;br /&gt;
Object creation is important to any program. If your app stores data via objects, you can use the S3 server to host them. In this example we create a text file hello.txt with content &amp;quot;Hello World!&amp;quot; and save it to my-new-bucket. Note that the :content_type is required for the ruby method S3Object.store.&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
AWS::S3::S3Object.store(&lt;br /&gt;
        'hello.txt',&lt;br /&gt;
        'Hello World!',&lt;br /&gt;
        'my-new-bucket',&lt;br /&gt;
        :content_type =&amp;gt; 'text/plain'&lt;br /&gt;
)&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
:Source: http://ceph.com/docs/master/radosgw/s3/ruby/&lt;br /&gt;
&lt;br /&gt;
===Change an object's ACL (access control list)===&lt;br /&gt;
Securing user data from view by any web user is important if you're keeping sensitive information about users. This example shows you how to open the hello.txt object created earlier in my-new-bucket to the public (anyone can access it). This example also shows how to restrict the secret_plans.txt so that no one can access them.&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
policy = AWS::S3::S3Object.acl('hello.txt', 'my-new-bucket')&lt;br /&gt;
policy.grants = [ AWS::S3::ACL::Grant.grant(:public_read) ]&lt;br /&gt;
AWS::S3::S3Object.acl('hello.txt', 'my-new-bucket', policy)&lt;br /&gt;
&lt;br /&gt;
policy = AWS::S3::S3Object.acl('secret_plans.txt', 'my-new-bucket')&lt;br /&gt;
policy.grants = []&lt;br /&gt;
AWS::S3::S3Object.acl('secret_plans.txt', 'my-new-bucket', policy)&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
:Source: http://ceph.com/docs/master/radosgw/s3/ruby/&lt;br /&gt;
&lt;br /&gt;
===Generating object download urls===&lt;br /&gt;
Generating download links for objects on the S3 server can make it easier for you to share your objects with other users. This example shows how to create a download link for the hello.txt object we created earlier.&lt;br /&gt;
Note that generating a link requires you to specify the bucket it resides in. Links can be public, as in the case of hello.txt below, or be set to expire on a timer through the expires_in symbol. In the example, the expiration of secret_plans.txt is 1 hour (60s * 60).&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
puts AWS::S3::S3Object.url_for(&lt;br /&gt;
        'hello.txt',&lt;br /&gt;
        'my-new-bucket',&lt;br /&gt;
        :authenticated =&amp;gt; false&lt;br /&gt;
)&lt;br /&gt;
&lt;br /&gt;
puts AWS::S3::S3Object.url_for(&lt;br /&gt;
        'secret_plans.txt',&lt;br /&gt;
        'my-new-bucket',&lt;br /&gt;
        :expires_in =&amp;gt; 60 * 60&lt;br /&gt;
)&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
'''Expected Output:'''&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
http://objects.dreamhost.com/my-bucket-name/hello.txt&lt;br /&gt;
http://objects.dreamhost.com/my-bucket-name/secret_plans.txt?Signature=XXXXXXXXXXXXXXXXXXXXXXXXXXX&amp;amp;Expires=1316027075&amp;amp;AWSAccessKeyId=XXXXXXXXXXXXXXXXXXX&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
:Source: http://ceph.com/docs/master/radosgw/s3/ruby/&lt;br /&gt;
&lt;br /&gt;
===Download an object from S3 to a local folder===&lt;br /&gt;
An app may retrieve objects stored on the S3 server and save them locally, where they can be accessed later. This example shows how to download the poetry.pdf object from the bucket my-new-bucket to a local file.&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
open('/home/larry/documents/poetry.pdf', 'w') do |file|&lt;br /&gt;
        AWS::S3::S3Object.stream('poetry.pdf', 'my-new-bucket') do |chunk|&lt;br /&gt;
                file.write(chunk)&lt;br /&gt;
        end&lt;br /&gt;
end&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
:Source: http://ceph.com/docs/master/radosgw/s3/ruby/&lt;br /&gt;
&lt;br /&gt;
===Upload a file to Amazon S3===&lt;br /&gt;
This example combines many of the examples shown above. A bucket is created and a local file is uploaded into it. The program then prints a public URL and a presigned download URL for the uploaded object, and finally deletes the object from S3 at the user's discretion; an example invocation follows the listing. &lt;br /&gt;
:As per the Apache License v2.0, the following code may be reproduced and redistributed under the license below &amp;lt;ref&amp;gt;[http://aws.amazon.com/apache-2-0/ license]&amp;lt;/ref&amp;gt;.&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
# Copyright 2011-2013 Amazon.com, Inc. or its affiliates. All Rights Reserved.&lt;br /&gt;
#&lt;br /&gt;
# Licensed under the Apache License, Version 2.0 (the &amp;quot;License&amp;quot;). You&lt;br /&gt;
# may not use this file except in compliance with the License. A copy of&lt;br /&gt;
# the License is located at&lt;br /&gt;
#&lt;br /&gt;
#     http://aws.amazon.com/apache2.0/&lt;br /&gt;
#&lt;br /&gt;
# or in the &amp;quot;license&amp;quot; file accompanying this file. This file is&lt;br /&gt;
# distributed on an &amp;quot;AS IS&amp;quot; BASIS, WITHOUT WARRANTIES OR CONDITIONS OF&lt;br /&gt;
# ANY KIND, either express or implied. See the License for the specific&lt;br /&gt;
# language governing permissions and limitations under the License.&lt;br /&gt;
&lt;br /&gt;
require 'aws-sdk'&lt;br /&gt;
&lt;br /&gt;
(bucket_name, file_name) = ARGV&lt;br /&gt;
unless bucket_name &amp;amp;&amp;amp; file_name&lt;br /&gt;
  puts &amp;quot;Usage: upload_file.rb &amp;lt;BUCKET_NAME&amp;gt; &amp;lt;FILE_NAME&amp;gt;&amp;quot;&lt;br /&gt;
  exit 1&lt;br /&gt;
end&lt;br /&gt;
&lt;br /&gt;
# get an instance of the S3 interface using the default configuration&lt;br /&gt;
s3 = AWS::S3.new&lt;br /&gt;
&lt;br /&gt;
# create a bucket&lt;br /&gt;
b = s3.buckets.create(bucket_name)&lt;br /&gt;
&lt;br /&gt;
# upload a file&lt;br /&gt;
basename = File.basename(file_name)&lt;br /&gt;
o = b.objects[basename]&lt;br /&gt;
o.write(:file =&amp;gt; file_name)&lt;br /&gt;
&lt;br /&gt;
puts &amp;quot;Uploaded #{file_name} to:&amp;quot;&lt;br /&gt;
puts o.public_url&lt;br /&gt;
&lt;br /&gt;
# generate a presigned URL&lt;br /&gt;
puts &amp;quot;\nUse this URL to download the file:&amp;quot;&lt;br /&gt;
puts o.url_for(:read)&lt;br /&gt;
&lt;br /&gt;
puts &amp;quot;(press any key to delete the object)&amp;quot;&lt;br /&gt;
$stdin.getc&lt;br /&gt;
&lt;br /&gt;
o.delete&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
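&lt;br /&gt;
For instance, assuming the script above is saved as upload_file.rb and your default AWS credentials are configured, it could be invoked as follows (the bucket and file names are only placeholders):&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
# creates my-new-bucket if needed, uploads notes.txt, and prints the URLs&lt;br /&gt;
ruby upload_file.rb my-new-bucket notes.txt&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;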
&lt;br /&gt;
See the following link for the documentation for AWS SDK - &amp;lt;ref&amp;gt;[http://docs.aws.amazon.com/AWSRubySDK/latest/_index.html AWS SDK for Ruby]&amp;lt;/ref&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==References==&lt;br /&gt;
&amp;lt;references/&amp;gt;&lt;/div&gt;</summary>
		<author><name>Achen4</name></author>
	</entry>
	<entry>
		<id>https://wiki.expertiza.ncsu.edu/index.php?title=CSC/ECE_517_Spring_2015/ch1a_7_SA&amp;diff=93738</id>
		<title>CSC/ECE 517 Spring 2015/ch1a 7 SA</title>
		<link rel="alternate" type="text/html" href="https://wiki.expertiza.ncsu.edu/index.php?title=CSC/ECE_517_Spring_2015/ch1a_7_SA&amp;diff=93738"/>
		<updated>2015-02-14T02:43:16Z</updated>

		<summary type="html">&lt;p&gt;Achen4: /* Alternatives to S3 */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;[[File:S3.gif|frame|Source: http://www.w7cloud.com/7-reasons-to-use-amazon-s3-cloud-computing-online-storage/|right]] Amazon Simple Storage Service (Amazon S3) is a remote, scalable, secure, and cost efficient storage space service provided by Amazon. Users are able to access their storage on Amazon S3 from the web via REST &amp;lt;ref&amp;gt;[http://en.wikipedia.org/wiki/Representational_state_transfer Wikipedia: REST]&amp;lt;/ref&amp;gt; HTTP &amp;lt;ref&amp;gt;[http://en.wikipedia.org/wiki/Hypertext_Transfer_Protocol HTTP]&amp;lt;/ref&amp;gt;, or SOAP &amp;lt;ref&amp;gt;[http://en.wikipedia.org/wiki/SOAP SOAP]&amp;lt;/ref&amp;gt; making their data accessible from virtually anywhere in the world. Amazon S3 implements redundancy across multiple devices on multiple facilities in order to safeguard against application failure ,data loss and minimization of downtime &amp;lt;ref&amp;gt;[http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/AmazonS3.html Amazon S3]&amp;lt;/ref&amp;gt;. Some of the most prominent users of Amazon S3 include: Netflix, SmugMug, Wetransfer, Pinterest, and NASDAQ &amp;lt;ref&amp;gt;[http://aws.amazon.com/s3/ Amazon S3 Homepage]&amp;lt;/ref&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
[https://docs.google.com/a/ncsu.edu/document/d/1TgBtp7flIPKJwkkShgtcIkt--mtHuwVHsQX6Tpzj1rc/edit Writing Assignment 1a]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
==Background==&lt;br /&gt;
&lt;br /&gt;
Amazon S3 launched in March of 2006 in the United States &amp;lt;ref&amp;gt;[http://phx.corporate-ir.net/phoenix.zhtml?c=176060&amp;amp;p=irol-newsArticle&amp;amp;ID=830816 Amazon Press Release]&amp;lt;/ref&amp;gt; and in Europe in November of 2007 &amp;lt;ref&amp;gt;[http://phx.corporate-ir.net/phoenix.zhtml?c=176060&amp;amp;p=irol-newsArticle&amp;amp;ID=1072982 Amazon Press Release]&amp;lt;/ref&amp;gt;. Since its inception, Amazon S3 has reported tremendous growth. Beginning in July of 2006, S3 hosted 800 million objects; April of 2007, 5 billion objects; October of 2007, 10 billion; Jan 2008, 14 billion &amp;lt;ref&amp;gt;[http://www.allthingsdistributed.com/2008/03/happy_birthday_amazon_s3.html Happy birthday Amazon S3]&amp;lt;/ref&amp;gt;; October 2008, 29 billion &amp;lt;ref&amp;gt;[https://aws.amazon.com/blogs/aws/amazon-s3-now/ amazon s3 now]&amp;lt;/ref&amp;gt;; March 2009, 52 billion &amp;lt;ref&amp;gt;[https://aws.amazon.com/blogs/aws/celebrating-s3s-third-birthday-with-an-upload-promotion/ Upload promotion]&amp;lt;/ref&amp;gt;; August 2009, 64 billion &amp;lt;ref&amp;gt;[http://www.eweek.com/c/a/Cloud-Computing/Amazons-Head-Start-in-the-Cloud-Pays-Off-584083 Amazons Head Start in the Cloud]&amp;lt;/ref&amp;gt;. In April of 2013, S3 now hosts more than 2 trillion objects and on average 1.1 million requests every second! &amp;lt;ref&amp;gt;[https://aws.amazon.com/blogs/aws/amazon-s3-two-trillion-objects-11-million-requests-second/ Two trillion objects]&amp;lt;/ref&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
===Alternatives to S3===&lt;br /&gt;
There currently exist several alternatives to Amazon S3:&lt;br /&gt;
:*[https://cloud.google.com/storage/| Google Cloud Storage]&lt;br /&gt;
:*[http://azure.microsoft.com/en-us/ |MSAzure]&lt;br /&gt;
&lt;br /&gt;
As with any online cloud storage service, the cost of storage is always a concern. The following table shows a cost comparison between them:&lt;br /&gt;
&lt;br /&gt;
[[File:PriceCompare.png]]&lt;br /&gt;
&lt;br /&gt;
While all three services are fairly comparable in price, there are some developer-support considerations when selecting a service. For instance, if an app is focused on the [https://www.android.com/| Android OS], using Google Cloud Storage provides more tightly coupled APIs for your app, and the development overhead may be much lower than wiring S3 into a Rails back end for a Google-centric app. Similarly, Azure offers more native support for [http://www.windowsphone.com/en-us/features| Windows Phone OS] apps, and likewise Kindle Fire apps have more native support through Amazon's [https://developer.amazon.com/public/apis| Fire OS] APIs.&lt;br /&gt;
&lt;br /&gt;
The reliability of a cloud storage service like S3 plays a heavy role in the selection process. Less downtime translates to a more robust and reliable app. In 2014, Amazon S3 registered only 23 outages totaling 2.69 hours of downtime whereas Google registered 8 outages totaling 14.23 hours of downtime. Microsoft Azure totaled 141 outages summing 10.97 hours. &amp;lt;ref&amp;gt; Barb Darrow. Amazon Web Services tops list of most reliable public clouds https://gigaom.com/2015/01/07/amazon-web-services-tops-list-of-most-reliable-public-clouds/&amp;lt;/ref&amp;gt;&lt;br /&gt;
&lt;br /&gt;
[[File:ServiceCompare.png]]&lt;br /&gt;
&lt;br /&gt;
===Design===&lt;br /&gt;
&lt;br /&gt;
S3 is an example of object storage and is unlike a traditional hierarchical file system. S3 exposes a simple feature set to improve robustness, and all data in S3 is accessed in terms of objects and buckets. &lt;br /&gt;
&lt;br /&gt;
====Objects====&lt;br /&gt;
&lt;br /&gt;
Objects are the basic units of storage in Amazon S3. Each object is composed of object data and metadata, and S3 supports a size of up to 5 terabytes per object. Metadata is a set of name-value pairs that describe the object, such as the date it was last modified, and the user can also store custom data about the object in its metadata. Every object is identified by a user-defined key and can be versioned. &amp;lt;ref&amp;gt;[http://docs.aws.amazon.com/AmazonS3/latest/dev/Introduction.html S3 Introduction]&amp;lt;/ref&amp;gt; An object consists of the following: key, version ID, value, metadata, subresources, and access control information. &amp;lt;ref&amp;gt;[http://docs.aws.amazon.com/AmazonS3/latest/dev/UsingObjects.html Using Objects]&amp;lt;/ref&amp;gt;&lt;br /&gt;
&lt;br /&gt;
====Buckets====&lt;br /&gt;
&lt;br /&gt;
A bucket is a container for objects and every object must be part of a bucket. Any number of objects can be part of a Bucket. Buckets can be configured to be hosted in a particular region (US, EU, Asia Pacific etc.) in order to optimize latency. S3 limits the number of buckets per account to 100. &amp;lt;ref&amp;gt;[http://docs.aws.amazon.com/AmazonS3/latest/dev/UsingBucket.html Using Bucket]&amp;lt;/ref&amp;gt;&lt;br /&gt;
&lt;br /&gt;
====Keys and Metadata====&lt;br /&gt;
&lt;br /&gt;
A user specifies a key on object creation, which uniquely identifies the object within its bucket. Object keys can be at most 1024 bytes long.&lt;br /&gt;
&lt;br /&gt;
There are two kinds of metadata for an object: system metadata and user-defined object metadata. System metadata, such as Date and Content-Type, is used by S3 for object management. Object metadata is optional and lets the user attach additional name-value pairs to an object when it is created. &amp;lt;ref&amp;gt;[http://docs.aws.amazon.com/AmazonS3/latest/dev/UsingMetadata.html Using Metadata]&amp;lt;/ref&amp;gt;&lt;br /&gt;
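&lt;br /&gt;
As a minimal sketch of how a key and user-defined metadata come together (assuming the version 1 aws-sdk gem used in the examples below, and a bucket named my-new-bucket that already exists):&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
require 'aws-sdk'&lt;br /&gt;
&lt;br /&gt;
s3  = AWS::S3.new&lt;br /&gt;
obj = s3.buckets['my-new-bucket'].objects['reports/q1.txt']  # the key identifies the object&lt;br /&gt;
&lt;br /&gt;
# system metadata (Content-Type) and user-defined metadata are supplied on write&lt;br /&gt;
obj.write('quarterly numbers',&lt;br /&gt;
          :content_type =&amp;gt; 'text/plain',&lt;br /&gt;
          :metadata     =&amp;gt; { 'department' =&amp;gt; 'finance' })&lt;br /&gt;
&lt;br /&gt;
puts obj.metadata['department']   # =&amp;gt; &amp;quot;finance&amp;quot;&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;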
&lt;br /&gt;
====Regions====&lt;br /&gt;
&lt;br /&gt;
Regions allow a user to specify the geographical region where a bucket will be stored. This can be used to optimize latency and minimize costs.&lt;br /&gt;
S3 supports the following regions - US Standard, US West (Oregon) region, US West (N. California) region, EU (Ireland) region, EU (Frankfurt) region, Asia Pacific (Singapore) region, Asia Pacific (Tokyo) region, Asia Pacific (Sydney) region, South America (Sao Paulo) region &amp;lt;ref&amp;gt;[http://docs.aws.amazon.com/AmazonS3/latest/dev/Introduction.html#Regions Regions]&amp;lt;/ref&amp;gt;&lt;br /&gt;
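&lt;br /&gt;
A bucket's region is chosen when the bucket is created. Below is a minimal sketch using the version 1 aws-sdk gem; the bucket name is a placeholder, and the :location_constraint option is assumed to accept a region identifier such as 'eu-west-1':&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
require 'aws-sdk'&lt;br /&gt;
&lt;br /&gt;
s3 = AWS::S3.new&lt;br /&gt;
&lt;br /&gt;
# create a bucket whose objects will be stored in the EU (Ireland) region&lt;br /&gt;
eu_bucket = s3.buckets.create('my-eu-bucket', :location_constraint =&amp;gt; 'eu-west-1')&lt;br /&gt;
puts &amp;quot;created #{eu_bucket.name}&amp;quot;&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;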
&lt;br /&gt;
====Versioning====&lt;br /&gt;
&lt;br /&gt;
Versioning can be used to retrieve and restore every version of an object in a bucket. When versioning is enabled, every change to an object (create, modify, delete) results in a separate version of the object, which can later be used for restoring or recovery. Versioning is configured at the bucket level, not for individual objects. It can be enabled per bucket, but a versioning-enabled bucket cannot be returned to the unversioned state; versioning can only be suspended. &amp;lt;ref&amp;gt;[http://docs.aws.amazon.com/AmazonS3/latest/dev/Versioning.html Versioning]&amp;lt;/ref&amp;gt;&lt;br /&gt;
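&lt;br /&gt;
A minimal sketch of enabling (and later suspending) versioning on a bucket, assuming the version 1 aws-sdk gem and its Bucket#enable_versioning / #suspend_versioning helpers; the bucket name is a placeholder:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
require 'aws-sdk'&lt;br /&gt;
&lt;br /&gt;
bucket = AWS::S3.new.buckets['my-new-bucket']&lt;br /&gt;
&lt;br /&gt;
bucket.enable_versioning            # every subsequent write creates a new version&lt;br /&gt;
puts bucket.versioning_enabled?     # =&amp;gt; true&lt;br /&gt;
&lt;br /&gt;
bucket.suspend_versioning           # versioning cannot be removed, only suspended&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;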
&lt;br /&gt;
====Access Permissions====&lt;br /&gt;
&lt;br /&gt;
All resources (buckets, objects, etc.) are private in Amazon S3 by default. Only the resource owner can access a resource, and the owner can grant access to other users. There are two types of access policies in S3: resource-based policies and user policies. Resource-based policies are attached to a particular resource, and user policies are assigned to a particular user.&amp;lt;ref&amp;gt;[http://docs.aws.amazon.com/AmazonS3/latest/dev/s3-access-control.html Access Control]&amp;lt;/ref&amp;gt;&lt;br /&gt;
&lt;br /&gt;
====Data Protection====&lt;br /&gt;
&lt;br /&gt;
Objects are redundantly stored on multiple devices across multiple facilities within a region for durability. To improve durability, write requests do not return success before storing the data across multiple facilities. Also checksums are used to verify data integrity. If any corruption is detected, it is repaired using redundant data.&amp;lt;ref&amp;gt;[http://docs.aws.amazon.com/AmazonS3/latest/dev/DataDurability.html Data Durability]&amp;lt;/ref&amp;gt;&lt;br /&gt;
&lt;br /&gt;
===Ruby and Amazon S3===&lt;br /&gt;
&lt;br /&gt;
Amazon Web Services (AWS) provides an SDK &amp;lt;ref&amp;gt;[http://rubygems.org/gems/aws-sdk| download]&amp;lt;/ref&amp;gt; that works with Ruby for many Amazon web services, including Amazon S3. Developers new to the AWS SDK should begin with version 2, as it includes many built-in features such as waiters, automatically paginated responses, and a streamlined plugin-style architecture. Version 2 of the SDK has two &amp;quot;packages&amp;quot;, also referred to as &amp;quot;gems&amp;quot; &amp;lt;ref&amp;gt;[http://en.wikipedia.org/wiki/RubyGems Wikipedia: RubyGems]&amp;lt;/ref&amp;gt;:&lt;br /&gt;
&lt;br /&gt;
:* '''aws-sdk-core''' - provides a direct mapping to the AWS APIs including automatic response paging, waiters, parameter validation, and Ruby type support&lt;br /&gt;
:* '''aws-sdk-resources'''  - provides an object-oriented abstraction over low-level interfaces in the core to reduce the complexity of utilizing core interfaces; resource objects reference other objects such as an Amazon S3 instance and the attributes and actions as instance variables and methods.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
It should be noted that there also exists a version 1 of the aws-sdk gem, which lacks some of the &amp;quot;convenience features&amp;quot; available in version 2. For more information, see the AWS Ruby Development Blog &amp;lt;ref&amp;gt;[http://ruby.awsblog.com/post/Tx2OMCYFEZX2I6A/AWS-SDK-for-Ruby-V2-Preview-Release|AWS Ruby Development Blog]&amp;lt;/ref&amp;gt;&lt;br /&gt;
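&lt;br /&gt;
To make the contrast concrete, here is a minimal sketch of the version 2 resource-oriented interface (aws-sdk-resources). It assumes AWS credentials are configured in the environment, and the bucket, key, region, and file path are placeholders:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
require 'aws-sdk'   # version 2 of the gem exposes the Aws namespace&lt;br /&gt;
&lt;br /&gt;
s3  = Aws::S3::Resource.new(:region =&amp;gt; 'us-east-1')&lt;br /&gt;
obj = s3.bucket('my-new-bucket').object('hello.txt')&lt;br /&gt;
&lt;br /&gt;
# upload a local file and print a time-limited download link&lt;br /&gt;
obj.upload_file('/tmp/hello.txt')&lt;br /&gt;
puts obj.presigned_url(:get, :expires_in =&amp;gt; 3600)&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;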
&lt;br /&gt;
==Examples==&lt;br /&gt;
The following section contains examples of Ruby code interfacing with S3. Sources and documentation for the code are provided. Please observe the copyrights if you choose to use any or all of the posted code.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
'''Note:''' There are three key classes in the AWS SDK &amp;lt;ref&amp;gt;[http://docs.aws.amazon.com/AmazonS3/latest/dev/UsingTheMPRubyAPI.html Using the Ruby API]&amp;lt;/ref&amp;gt;; a short sketch tying them together follows the list -&lt;br /&gt;
&lt;br /&gt;
:* '''AWS::S3''' - Denotes an interface to Amazon S3 for the Ruby SDK. It has the ''#buckets'' instance method for creating new buckets or accessing existing buckets.&lt;br /&gt;
:* '''AWS::S3::Bucket''' - Denotes an Amazon S3 Bucket. It provides the ''#objects'' instance method to access existing objects and also other methods to get information about a bucket.&lt;br /&gt;
:* '''AWS::S3::S3Object''' - Denotes an Amazon S3 object. It provides methods that give information about the object, as well as methods for setting access permissions and for copying, deleting, and uploading objects.&lt;br /&gt;
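&lt;br /&gt;
A minimal sketch of how these three classes relate in the version 1 aws-sdk gem (the bucket and key names are placeholders):&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
require 'aws-sdk'&lt;br /&gt;
&lt;br /&gt;
s3     = AWS::S3.new                      # AWS::S3 - entry point to the service&lt;br /&gt;
bucket = s3.buckets['my-new-bucket']      # AWS::S3::Bucket - container of objects&lt;br /&gt;
object = bucket.objects['hello.txt']      # AWS::S3::S3Object - a single stored object&lt;br /&gt;
&lt;br /&gt;
object.write('Hello World!')&lt;br /&gt;
puts object.read&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;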
&lt;br /&gt;
===Creating a connection to S3 server===&lt;br /&gt;
&lt;br /&gt;
Connecting to the S3 server is the essential starting point for accessing your data. The following example shows how to connect to the server via SSL &amp;lt;ref&amp;gt;Wikipedia: SSL http://en.wikipedia.org/wiki/SSL&amp;lt;/ref&amp;gt;&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
AWS::S3::Base.establish_connection!(&lt;br /&gt;
        :server            =&amp;gt; 'objects.example.com',&lt;br /&gt;
        :use_ssl           =&amp;gt; true,&lt;br /&gt;
        :access_key_id     =&amp;gt; 'my-access-key',&lt;br /&gt;
        :secret_access_key =&amp;gt; 'my-secret-key'&lt;br /&gt;
)&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
:Source: http://ceph.com/docs/master/radosgw/s3/ruby/&lt;br /&gt;
&lt;br /&gt;
===Listing all buckets you own===&lt;br /&gt;
Buckets contain the objects you upload. To access any object, you must access its bucket first. This example shows you how to query the S3 server for a list of all the buckets you own.&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
AWS::S3::Service.buckets.each do |bucket|&lt;br /&gt;
        puts &amp;quot;#{bucket.name}\t#{bucket.creation_date}&amp;quot;&lt;br /&gt;
end&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
'''Expected output:'''&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
mybuckat1   2011-04-21T18:05:39.000Z&lt;br /&gt;
mybuckat2   2011-04-21T18:05:48.000Z&lt;br /&gt;
mybuckat3   2011-04-21T18:07:18.000Z&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
:Source: http://ceph.com/docs/master/radosgw/s3/ruby/&lt;br /&gt;
&lt;br /&gt;
===Listing a bucket's contents===&lt;br /&gt;
For a known bucket (see example for listing all buckets you own), you can also query the server for all objects inside. This code shows you how.&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
new_bucket = AWS::S3::Bucket.find('my-new-bucket')&lt;br /&gt;
new_bucket.each do |object|&lt;br /&gt;
        puts &amp;quot;#{object.key}\t#{object.about['content-length']}\t#{object.about['last-modified']}&amp;quot;&lt;br /&gt;
end&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
'''Expected output'''&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
file1.filex 251262  2011-08-08T21:35:48.000Z&lt;br /&gt;
file2.filex 262518  2011-08-08T21:38:01.000Z&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
:Source: http://ceph.com/docs/master/radosgw/s3/ruby/&lt;br /&gt;
&lt;br /&gt;
===Deleting an object===&lt;br /&gt;
This example shows you how to delete the object &amp;quot;goodbye.txt&amp;quot;. You must specify the bucket as the second parameter of the S3Object.delete function.&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
AWS::S3::S3Object.delete('goodbye.txt', 'my-new-bucket')&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
:Source: http://ceph.com/docs/master/radosgw/s3/ruby/&lt;br /&gt;
&lt;br /&gt;
===Deleting a bucket===&lt;br /&gt;
Bucket removal may be necessary when you're trying to reduce the cost of maintaining your data on the S3 servers. This code allows you to delete buckets but only if they're empty.&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
AWS::S3::Bucket.delete('my-new-bucket')&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
:Source: http://ceph.com/docs/master/radosgw/s3/ruby/&lt;br /&gt;
&lt;br /&gt;
===Forced removal of non-empty buckets===&lt;br /&gt;
If you want to forcibly remove a bucket and dump all of its contents, you can force a deletion as shown below.&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
AWS::S3::Bucket.delete('my-new-bucket', :force =&amp;gt; true)&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
:Source: http://ceph.com/docs/master/radosgw/s3/ruby/&lt;br /&gt;
&lt;br /&gt;
===Creating an object===&lt;br /&gt;
Object creation is central to any program that stores its data as objects, and the S3 server can host those objects. In this example we create a text file hello.txt with the content &amp;quot;Hello World!&amp;quot; and save it to my-new-bucket. Note the :content_type option, which sets the MIME type stored with the object by the Ruby method S3Object.store.&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
AWS::S3::S3Object.store(&lt;br /&gt;
        'hello.txt',&lt;br /&gt;
        'Hello World!',&lt;br /&gt;
        'my-new-bucket',&lt;br /&gt;
        :content_type =&amp;gt; 'text/plain'&lt;br /&gt;
)&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
:Source: http://ceph.com/docs/master/radosgw/s3/ruby/&lt;br /&gt;
&lt;br /&gt;
===Change an object's ACL (access control list)===&lt;br /&gt;
Securing user data from view by arbitrary web users is important if you're keeping sensitive information. This example shows how to open the hello.txt object created earlier in my-new-bucket to the public (anyone can read it), and how to restrict secret_plans.txt so that no one else can access it.&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
policy = AWS::S3::S3Object.acl('hello.txt', 'my-new-bucket')&lt;br /&gt;
policy.grants = [ AWS::S3::ACL::Grant.grant(:public_read) ]&lt;br /&gt;
AWS::S3::S3Object.acl('hello.txt', 'my-new-bucket', policy)&lt;br /&gt;
&lt;br /&gt;
policy = AWS::S3::S3Object.acl('secret_plans.txt', 'my-new-bucket')&lt;br /&gt;
policy.grants = []&lt;br /&gt;
AWS::S3::S3Object.acl('secret_plans.txt', 'my-new-bucket', policy)&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
:Source: http://ceph.com/docs/master/radosgw/s3/ruby/&lt;br /&gt;
&lt;br /&gt;
===Generating object download urls===&lt;br /&gt;
Generating download links for objects on the S3 server can make it easier for you to share your objects with other users. This example shows how to create a download link for the hello.txt object we created earlier.&lt;br /&gt;
Note that generating a link requires you to specify the bucket the object resides in. Links can be public, as in the case of hello.txt below, or set to expire after an interval given through the :expires_in option. In the example, the link for secret_plans.txt expires after 1 hour (60 * 60 = 3600 seconds).&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
puts AWS::S3::S3Object.url_for(&lt;br /&gt;
        'hello.txt',&lt;br /&gt;
        'my-new-bucket',&lt;br /&gt;
        :authenticated =&amp;gt; false&lt;br /&gt;
)&lt;br /&gt;
&lt;br /&gt;
puts AWS::S3::S3Object.url_for(&lt;br /&gt;
        'secret_plans.txt',&lt;br /&gt;
        'my-new-bucket',&lt;br /&gt;
        :expires_in =&amp;gt; 60 * 60&lt;br /&gt;
)&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
'''Expected Output:'''&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
http://objects.dreamhost.com/my-bucket-name/hello.txt&lt;br /&gt;
http://objects.dreamhost.com/my-bucket-name/secret_plans.txt?Signature=XXXXXXXXXXXXXXXXXXXXXXXXXXX&amp;amp;Expires=1316027075&amp;amp;AWSAccessKeyId=XXXXXXXXXXXXXXXXXXX&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
:Source: http://ceph.com/docs/master/radosgw/s3/ruby/&lt;br /&gt;
&lt;br /&gt;
===Download an object from S3 to a local folder===&lt;br /&gt;
An app may choose to retrieve objects from the S3 server and save them to the local file system, where they can be accessed later. This example shows how to download the poetry.pdf object from the bucket my-new-bucket to a local file.&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
open('/home/larry/documents/poetry.pdf', 'wb') do |file|  # binary mode, since the PDF is not plain text&lt;br /&gt;
        AWS::S3::S3Object.stream('poetry.pdf', 'my-new-bucket') do |chunk|&lt;br /&gt;
                file.write(chunk)&lt;br /&gt;
        end&lt;br /&gt;
end&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
:Source: http://ceph.com/docs/master/radosgw/s3/ruby/&lt;br /&gt;
&lt;br /&gt;
===Upload a file to Amazon S3===&lt;br /&gt;
This example combines many of the examples shown above. A bucket is created and a local file is uploaded into it. The program then prints a public URL and a presigned download URL for the uploaded object, and finally deletes the object from S3 at the user's discretion. &lt;br /&gt;
:As per the Apache License v2.0, the following code may be reproduced and redistributed under the license below &amp;lt;ref&amp;gt;[http://aws.amazon.com/apache-2-0/ license]&amp;lt;/ref&amp;gt;.&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
# Copyright 2011-2013 Amazon.com, Inc. or its affiliates. All Rights Reserved.&lt;br /&gt;
#&lt;br /&gt;
# Licensed under the Apache License, Version 2.0 (the &amp;quot;License&amp;quot;). You&lt;br /&gt;
# may not use this file except in compliance with the License. A copy of&lt;br /&gt;
# the License is located at&lt;br /&gt;
#&lt;br /&gt;
#     http://aws.amazon.com/apache2.0/&lt;br /&gt;
#&lt;br /&gt;
# or in the &amp;quot;license&amp;quot; file accompanying this file. This file is&lt;br /&gt;
# distributed on an &amp;quot;AS IS&amp;quot; BASIS, WITHOUT WARRANTIES OR CONDITIONS OF&lt;br /&gt;
# ANY KIND, either express or implied. See the License for the specific&lt;br /&gt;
# language governing permissions and limitations under the License.&lt;br /&gt;
&lt;br /&gt;
require 'aws-sdk'&lt;br /&gt;
&lt;br /&gt;
(bucket_name, file_name) = ARGV&lt;br /&gt;
unless bucket_name &amp;amp;&amp;amp; file_name&lt;br /&gt;
  puts &amp;quot;Usage: upload_file.rb &amp;lt;BUCKET_NAME&amp;gt; &amp;lt;FILE_NAME&amp;gt;&amp;quot;&lt;br /&gt;
  exit 1&lt;br /&gt;
end&lt;br /&gt;
&lt;br /&gt;
# get an instance of the S3 interface using the default configuration&lt;br /&gt;
s3 = AWS::S3.new&lt;br /&gt;
&lt;br /&gt;
# create a bucket&lt;br /&gt;
b = s3.buckets.create(bucket_name)&lt;br /&gt;
&lt;br /&gt;
# upload a file&lt;br /&gt;
basename = File.basename(file_name)&lt;br /&gt;
o = b.objects[basename]&lt;br /&gt;
o.write(:file =&amp;gt; file_name)&lt;br /&gt;
&lt;br /&gt;
puts &amp;quot;Uploaded #{file_name} to:&amp;quot;&lt;br /&gt;
puts o.public_url&lt;br /&gt;
&lt;br /&gt;
# generate a presigned URL&lt;br /&gt;
puts &amp;quot;\nUse this URL to download the file:&amp;quot;&lt;br /&gt;
puts o.url_for(:read)&lt;br /&gt;
&lt;br /&gt;
puts &amp;quot;(press any key to delete the object)&amp;quot;&lt;br /&gt;
$stdin.getc&lt;br /&gt;
&lt;br /&gt;
o.delete&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
See the following link for the documentation for AWS SDK - &amp;lt;ref&amp;gt;[http://docs.aws.amazon.com/AWSRubySDK/latest/_index.html AWS SDK for Ruby]&amp;lt;/ref&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==References==&lt;br /&gt;
&amp;lt;references/&amp;gt;&lt;/div&gt;</summary>
		<author><name>Achen4</name></author>
	</entry>
	<entry>
		<id>https://wiki.expertiza.ncsu.edu/index.php?title=CSC/ECE_517_Spring_2015/ch1a_7_SA&amp;diff=93737</id>
		<title>CSC/ECE 517 Spring 2015/ch1a 7 SA</title>
		<link rel="alternate" type="text/html" href="https://wiki.expertiza.ncsu.edu/index.php?title=CSC/ECE_517_Spring_2015/ch1a_7_SA&amp;diff=93737"/>
		<updated>2015-02-14T02:42:23Z</updated>

		<summary type="html">&lt;p&gt;Achen4: /* Alternatives to S3 */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;[[File:S3.gif|frame|Source: http://www.w7cloud.com/7-reasons-to-use-amazon-s3-cloud-computing-online-storage/|right]] Amazon Simple Storage Service (Amazon S3) is a remote, scalable, secure, and cost efficient storage space service provided by Amazon. Users are able to access their storage on Amazon S3 from the web via REST &amp;lt;ref&amp;gt;[http://en.wikipedia.org/wiki/Representational_state_transfer Wikipedia: REST]&amp;lt;/ref&amp;gt; HTTP &amp;lt;ref&amp;gt;[http://en.wikipedia.org/wiki/Hypertext_Transfer_Protocol HTTP]&amp;lt;/ref&amp;gt;, or SOAP &amp;lt;ref&amp;gt;[http://en.wikipedia.org/wiki/SOAP SOAP]&amp;lt;/ref&amp;gt; making their data accessible from virtually anywhere in the world. Amazon S3 implements redundancy across multiple devices on multiple facilities in order to safeguard against application failure ,data loss and minimization of downtime &amp;lt;ref&amp;gt;[http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/AmazonS3.html Amazon S3]&amp;lt;/ref&amp;gt;. Some of the most prominent users of Amazon S3 include: Netflix, SmugMug, Wetransfer, Pinterest, and NASDAQ &amp;lt;ref&amp;gt;[http://aws.amazon.com/s3/ Amazon S3 Homepage]&amp;lt;/ref&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
[https://docs.google.com/a/ncsu.edu/document/d/1TgBtp7flIPKJwkkShgtcIkt--mtHuwVHsQX6Tpzj1rc/edit Writing Assignment 1a]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
==Background==&lt;br /&gt;
&lt;br /&gt;
Amazon S3 launched in March of 2006 in the United States &amp;lt;ref&amp;gt;[http://phx.corporate-ir.net/phoenix.zhtml?c=176060&amp;amp;p=irol-newsArticle&amp;amp;ID=830816 Amazon Press Release]&amp;lt;/ref&amp;gt; and in Europe in November of 2007 &amp;lt;ref&amp;gt;[http://phx.corporate-ir.net/phoenix.zhtml?c=176060&amp;amp;p=irol-newsArticle&amp;amp;ID=1072982 Amazon Press Release]&amp;lt;/ref&amp;gt;. Since its inception, Amazon S3 has reported tremendous growth. Beginning in July of 2006, S3 hosted 800 million objects; April of 2007, 5 billion objects; October of 2007, 10 billion; Jan 2008, 14 billion &amp;lt;ref&amp;gt;[http://www.allthingsdistributed.com/2008/03/happy_birthday_amazon_s3.html Happy birthday Amazon S3]&amp;lt;/ref&amp;gt;; October 2008, 29 billion &amp;lt;ref&amp;gt;[https://aws.amazon.com/blogs/aws/amazon-s3-now/ amazon s3 now]&amp;lt;/ref&amp;gt;; March 2009, 52 billion &amp;lt;ref&amp;gt;[https://aws.amazon.com/blogs/aws/celebrating-s3s-third-birthday-with-an-upload-promotion/ Upload promotion]&amp;lt;/ref&amp;gt;; August 2009, 64 billion &amp;lt;ref&amp;gt;[http://www.eweek.com/c/a/Cloud-Computing/Amazons-Head-Start-in-the-Cloud-Pays-Off-584083 Amazons Head Start in the Cloud]&amp;lt;/ref&amp;gt;. In April of 2013, S3 now hosts more than 2 trillion objects and on average 1.1 million requests every second! &amp;lt;ref&amp;gt;[https://aws.amazon.com/blogs/aws/amazon-s3-two-trillion-objects-11-million-requests-second/ Two trillion objects]&amp;lt;/ref&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
===Alternatives to S3===&lt;br /&gt;
There currently exist several alternatives to Amazon S3:&lt;br /&gt;
:*[https://cloud.google.com/storage/| Google Cloud Storage]&lt;br /&gt;
:*[http://azure.microsoft.com/en-us/ |MSAzure]&lt;br /&gt;
&lt;br /&gt;
As with any online cloud storage service, the cost of storage is always a concern. The following table shows a cost comparison between them:&lt;br /&gt;
&lt;br /&gt;
[[File:PriceCompare.png]]&lt;br /&gt;
&lt;br /&gt;
While all three services are fairly comparable in price, there are some developer-support considerations when selecting a service. For instance, if an app is focused on the [https://www.android.com/| Android OS], using Google Cloud Storage provides more tightly coupled APIs for your app, and the development overhead may be much lower than wiring S3 into a Rails back end for a Google-centric app. Similarly, Azure offers more native support for [http://www.windowsphone.com/en-us/features| Windows Phone OS] apps, and likewise Kindle Fire apps have more native support through Amazon's [https://developer.amazon.com/public/apis| Fire OS] APIs.&lt;br /&gt;
&lt;br /&gt;
The reliability of a cloud storage service like S3 plays a heavy role in the selection process. Less downtime translates to a more robust and reliable app. In 2014, Amazon S3 registered only 23 outages totaling 2.69 hours of downtime whereas Google registered 8 outages totaling 14.23 hours of downtime. Microsoft Azure totaled 141 outages summing 10.97 hours. &amp;lt;ref&amp;gt; Barb Darrow. Amazon Web Services tops list of most reliable public clouds [https://gigaom.com/2015/01/07/amazon-web-services-tops-list-of-most-reliable-public-clouds/]&amp;lt;/ref&amp;gt;&lt;br /&gt;
&lt;br /&gt;
[[File:ServiceCompare.png]]&lt;br /&gt;
&lt;br /&gt;
===Design===&lt;br /&gt;
&lt;br /&gt;
S3 is an example of object storage and is unlike a traditional hierarchical file system. S3 exposes a simple feature set to improve robustness, and all data in S3 is accessed in terms of objects and buckets. &lt;br /&gt;
&lt;br /&gt;
====Objects====&lt;br /&gt;
&lt;br /&gt;
Objects are the basic units of storage in Amazon S3. Each object is composed of object data and metadata, and S3 supports a size of up to 5 terabytes per object. Metadata is a set of name-value pairs that describe the object, such as the date it was last modified, and the user can also store custom data about the object in its metadata. Every object is identified by a user-defined key and can be versioned. &amp;lt;ref&amp;gt;[http://docs.aws.amazon.com/AmazonS3/latest/dev/Introduction.html S3 Introduction]&amp;lt;/ref&amp;gt; An object consists of the following: key, version ID, value, metadata, subresources, and access control information. &amp;lt;ref&amp;gt;[http://docs.aws.amazon.com/AmazonS3/latest/dev/UsingObjects.html Using Objects]&amp;lt;/ref&amp;gt;&lt;br /&gt;
&lt;br /&gt;
====Buckets====&lt;br /&gt;
&lt;br /&gt;
A bucket is a container for objects and every object must be part of a bucket. Any number of objects can be part of a Bucket. Buckets can be configured to be hosted in a particular region (US, EU, Asia Pacific etc.) in order to optimize latency. S3 limits the number of buckets per account to 100. &amp;lt;ref&amp;gt;[http://docs.aws.amazon.com/AmazonS3/latest/dev/UsingBucket.html Using Bucket]&amp;lt;/ref&amp;gt;&lt;br /&gt;
&lt;br /&gt;
====Keys and Metadata====&lt;br /&gt;
&lt;br /&gt;
A user specifies a key on object creation, which uniquely identifies the object within its bucket. Object keys can be at most 1024 bytes long.&lt;br /&gt;
&lt;br /&gt;
There are two kinds of metadata for an object: system metadata and user-defined object metadata. System metadata, such as Date and Content-Type, is used by S3 for object management. Object metadata is optional and lets the user attach additional name-value pairs to an object when it is created. &amp;lt;ref&amp;gt;[http://docs.aws.amazon.com/AmazonS3/latest/dev/UsingMetadata.html Using Metadata]&amp;lt;/ref&amp;gt;&lt;br /&gt;
&lt;br /&gt;
====Regions====&lt;br /&gt;
&lt;br /&gt;
Regions allow a user to specify the geographical region where a bucket will be stored. This can be used to optimize latency and minimize costs.&lt;br /&gt;
S3 supports the following regions - US Standard, US West (Oregon) region, US West (N. California) region, EU (Ireland) region, EU (Frankfurt) region, Asia Pacific (Singapore) region, Asia Pacific (Tokyo) region, Asia Pacific (Sydney) region, South America (Sao Paulo) region &amp;lt;ref&amp;gt;[http://docs.aws.amazon.com/AmazonS3/latest/dev/Introduction.html#Regions Regions]&amp;lt;/ref&amp;gt;&lt;br /&gt;
&lt;br /&gt;
====Versioning====&lt;br /&gt;
&lt;br /&gt;
Versioning can be used to retrieve and restore every version of an object in a bucket. When versioning is enabled, every change to an object (create, modify, delete) results in a separate version of the object, which can later be used for restoring or recovery. Versioning is configured at the bucket level, not for individual objects. It can be enabled per bucket, but a versioning-enabled bucket cannot be returned to the unversioned state; versioning can only be suspended. &amp;lt;ref&amp;gt;[http://docs.aws.amazon.com/AmazonS3/latest/dev/Versioning.html Versioning]&amp;lt;/ref&amp;gt;&lt;br /&gt;
&lt;br /&gt;
====Access Permissions====&lt;br /&gt;
&lt;br /&gt;
All resources (buckets, objects, etc.) are private in Amazon S3 by default. Only the resource owner can access a resource, and the owner can grant access to other users. There are two types of access policies in S3: resource-based policies and user policies. Resource-based policies are attached to a particular resource, and user policies are assigned to a particular user.&amp;lt;ref&amp;gt;[http://docs.aws.amazon.com/AmazonS3/latest/dev/s3-access-control.html Access Control]&amp;lt;/ref&amp;gt;&lt;br /&gt;
&lt;br /&gt;
====Data Protection====&lt;br /&gt;
&lt;br /&gt;
Objects are redundantly stored on multiple devices across multiple facilities within a region for durability. To improve durability, write requests do not return success before storing the data across multiple facilities. Also checksums are used to verify data integrity. If any corruption is detected, it is repaired using redundant data.&amp;lt;ref&amp;gt;[http://docs.aws.amazon.com/AmazonS3/latest/dev/DataDurability.html Data Durability]&amp;lt;/ref&amp;gt;&lt;br /&gt;
&lt;br /&gt;
===Ruby and Amazon S3===&lt;br /&gt;
&lt;br /&gt;
Amazon Web Services (AWS) provides an SDK &amp;lt;ref&amp;gt;[http://rubygems.org/gems/aws-sdk| download]&amp;lt;/ref&amp;gt; that works with Ruby for many Amazon web services, including Amazon S3. Developers new to the AWS SDK should begin with version 2, as it includes many built-in features such as waiters, automatically paginated responses, and a streamlined plugin-style architecture. Version 2 of the SDK has two &amp;quot;packages&amp;quot;, also referred to as &amp;quot;gems&amp;quot; &amp;lt;ref&amp;gt;[http://en.wikipedia.org/wiki/RubyGems Wikipedia: RubyGems]&amp;lt;/ref&amp;gt;:&lt;br /&gt;
&lt;br /&gt;
:* '''aws-sdk-core''' - provides a direct mapping to the AWS APIs including automatic response paging, waiters, parameter validation, and Ruby type support&lt;br /&gt;
:* '''aws-sdk-resources'''  - provides an object-oriented abstraction over low-level interfaces in the core to reduce the complexity of utilizing core interfaces; resource objects reference other objects such as an Amazon S3 instance and the attributes and actions as instance variables and methods.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
It should be noted that there also exists a version 1 of the aws-sdk gem, which lacks some of the &amp;quot;convenience features&amp;quot; available in version 2. For more information, see the AWS Ruby Development Blog &amp;lt;ref&amp;gt;[http://ruby.awsblog.com/post/Tx2OMCYFEZX2I6A/AWS-SDK-for-Ruby-V2-Preview-Release|AWS Ruby Development Blog]&amp;lt;/ref&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==Examples==&lt;br /&gt;
The following section contains examples of Ruby code interfacing with S3. Sources and documentation for the code are provided. Please observe the copyrights if you choose to use any or all of the posted code.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
'''Note:''' There are three key classes in the AWS SDK &amp;lt;ref&amp;gt;[http://docs.aws.amazon.com/AmazonS3/latest/dev/UsingTheMPRubyAPI.html Using the Ruby API]&amp;lt;/ref&amp;gt; -&lt;br /&gt;
&lt;br /&gt;
:* '''AWS::S3''' - Denotes an interface to Amazon S3 for the Ruby SDK. It has the ''#buckets'' instance method for creating new buckets or accessing existing buckets.&lt;br /&gt;
:* '''AWS::S3::Bucket''' - Denotes an Amazon S3 Bucket. It provides the ''#objects'' instance method to access existing objects and also other methods to get information about a bucket.&lt;br /&gt;
:* '''AWS::S3::S3Object''' - Denotes an Amazon S3 object. It provides methods that give information about the object, as well as methods for setting access permissions and for copying, deleting, and uploading objects.&lt;br /&gt;
&lt;br /&gt;
===Creating a connection to S3 server===&lt;br /&gt;
&lt;br /&gt;
Connecting to the S3 server is the essential starting point for accessing your data. The following example shows how to connect to the server via SSL &amp;lt;ref&amp;gt;Wikipedia: SSL http://en.wikipedia.org/wiki/SSL&amp;lt;/ref&amp;gt;&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
AWS::S3::Base.establish_connection!(&lt;br /&gt;
        :server            =&amp;gt; 'objects.example.com',&lt;br /&gt;
        :use_ssl           =&amp;gt; true,&lt;br /&gt;
        :access_key_id     =&amp;gt; 'my-access-key',&lt;br /&gt;
        :secret_access_key =&amp;gt; 'my-secret-key'&lt;br /&gt;
)&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
:Source: http://ceph.com/docs/master/radosgw/s3/ruby/&lt;br /&gt;
&lt;br /&gt;
===Listing all buckets you own===&lt;br /&gt;
Buckets contain the objects you upload. To access any object, you must access its bucket first. This example shows you how to query the S3 server for a list of all the buckets you own.&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
AWS::S3::Service.buckets.each do |bucket|&lt;br /&gt;
        puts &amp;quot;#{bucket.name}\t#{bucket.creation_date}&amp;quot;&lt;br /&gt;
end&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
'''Expected output:'''&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
mybuckat1   2011-04-21T18:05:39.000Z&lt;br /&gt;
mybuckat2   2011-04-21T18:05:48.000Z&lt;br /&gt;
mybuckat3   2011-04-21T18:07:18.000Z&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
:Source: http://ceph.com/docs/master/radosgw/s3/ruby/&lt;br /&gt;
&lt;br /&gt;
===Listing a bucket's contents===&lt;br /&gt;
For a known bucket (see example for listing all buckets you own), you can also query the server for all objects inside. This code shows you how.&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
new_bucket = AWS::S3::Bucket.find('my-new-bucket')&lt;br /&gt;
new_bucket.each do |object|&lt;br /&gt;
        puts &amp;quot;#{object.key}\t#{object.about['content-length']}\t#{object.about['last-modified']}&amp;quot;&lt;br /&gt;
end&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
'''Expected output'''&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
file1.filex 251262  2011-08-08T21:35:48.000Z&lt;br /&gt;
file2.filex 262518  2011-08-08T21:38:01.000Z&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
:Source: http://ceph.com/docs/master/radosgw/s3/ruby/&lt;br /&gt;
&lt;br /&gt;
===Deleting an object===&lt;br /&gt;
This example shows you how to delete the object &amp;quot;goodbye.txt&amp;quot;. You must specify the bucket as the second parameter of the S3Object.delete function.&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
AWS::S3::S3Object.delete('goodbye.txt', 'my-new-bucket')&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
:Source: http://ceph.com/docs/master/radosgw/s3/ruby/&lt;br /&gt;
&lt;br /&gt;
===Deleting a bucket===&lt;br /&gt;
Bucket removal may be necessary when you're trying to reduce the cost of maintaining your data on the S3 servers. This code allows you to delete buckets but only if they're empty.&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
AWS::S3::Bucket.delete('my-new-bucket')&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
:Source: http://ceph.com/docs/master/radosgw/s3/ruby/&lt;br /&gt;
&lt;br /&gt;
===Forced removal of non-empty buckets===&lt;br /&gt;
If you want to forcibly remove a bucket and dump all of its contents, you can force a deletion as shown below.&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
AWS::S3::Bucket.delete('my-new-bucket', :force =&amp;gt; true)&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
:Source: http://ceph.com/docs/master/radosgw/s3/ruby/&lt;br /&gt;
&lt;br /&gt;
===Creating an object===&lt;br /&gt;
Object creation is central to any program that stores its data as objects, and the S3 server can host those objects. In this example we create a text file hello.txt with the content &amp;quot;Hello World!&amp;quot; and save it to my-new-bucket. Note the :content_type option, which sets the MIME type stored with the object by the Ruby method S3Object.store.&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
AWS::S3::S3Object.store(&lt;br /&gt;
        'hello.txt',&lt;br /&gt;
        'Hello World!',&lt;br /&gt;
        'my-new-bucket',&lt;br /&gt;
        :content_type =&amp;gt; 'text/plain'&lt;br /&gt;
)&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
:Source: http://ceph.com/docs/master/radosgw/s3/ruby/&lt;br /&gt;
&lt;br /&gt;
===Change an object's ACL (access control list)===&lt;br /&gt;
Securing user data from view by arbitrary web users is important if you're keeping sensitive information. This example shows how to open the hello.txt object created earlier in my-new-bucket to the public (anyone can read it), and how to restrict secret_plans.txt so that no one else can access it.&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
policy = AWS::S3::S3Object.acl('hello.txt', 'my-new-bucket')&lt;br /&gt;
policy.grants = [ AWS::S3::ACL::Grant.grant(:public_read) ]&lt;br /&gt;
AWS::S3::S3Object.acl('hello.txt', 'my-new-bucket', policy)&lt;br /&gt;
&lt;br /&gt;
policy = AWS::S3::S3Object.acl('secret_plans.txt', 'my-new-bucket')&lt;br /&gt;
policy.grants = []&lt;br /&gt;
AWS::S3::S3Object.acl('secret_plans.txt', 'my-new-bucket', policy)&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
:Source: http://ceph.com/docs/master/radosgw/s3/ruby/&lt;br /&gt;
&lt;br /&gt;
===Generating object download urls===&lt;br /&gt;
Generating download links for objects on the S3 server can make it easier for you to share your objects with other users. This example shows how to create a download link for the hello.txt object we created earlier.&lt;br /&gt;
Note that generating a link requires you to specify the bucket the object resides in. Links can be public, as in the case of hello.txt below, or set to expire after an interval given through the :expires_in option. In the example, the link for secret_plans.txt expires after 1 hour (60 * 60 = 3600 seconds).&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
puts AWS::S3::S3Object.url_for(&lt;br /&gt;
        'hello.txt',&lt;br /&gt;
        'my-new-bucket',&lt;br /&gt;
        :authenticated =&amp;gt; false&lt;br /&gt;
)&lt;br /&gt;
&lt;br /&gt;
puts AWS::S3::S3Object.url_for(&lt;br /&gt;
        'secret_plans.txt',&lt;br /&gt;
        'my-new-bucket',&lt;br /&gt;
        :expires_in =&amp;gt; 60 * 60&lt;br /&gt;
)&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
'''Expected Output:'''&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
http://objects.dreamhost.com/my-bucket-name/hello.txt&lt;br /&gt;
http://objects.dreamhost.com/my-bucket-name/secret_plans.txt?Signature=XXXXXXXXXXXXXXXXXXXXXXXXXXX&amp;amp;Expires=1316027075&amp;amp;AWSAccessKeyId=XXXXXXXXXXXXXXXXXXX&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
:Source: http://ceph.com/docs/master/radosgw/s3/ruby/&lt;br /&gt;
&lt;br /&gt;
===Download an object from S3 to a local folder===&lt;br /&gt;
An app may choose to retrieve objects from the S3 server and save them to the local file system, where they can be accessed later. This example shows how to download the poetry.pdf object from the bucket my-new-bucket to a local file.&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
open('/home/larry/documents/poetry.pdf', 'wb') do |file|  # binary mode, since the PDF is not plain text&lt;br /&gt;
        AWS::S3::S3Object.stream('poetry.pdf', 'my-new-bucket') do |chunk|&lt;br /&gt;
                file.write(chunk)&lt;br /&gt;
        end&lt;br /&gt;
end&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
:Source: http://ceph.com/docs/master/radosgw/s3/ruby/&lt;br /&gt;
&lt;br /&gt;
===Upload a file to Amazon S3===&lt;br /&gt;
This example combines many of the examples shown above. A bucket is created and a local file is uploaded into it. The program then prints a public URL and a presigned download URL for the uploaded object, and finally deletes the object from S3 at the user's discretion. &lt;br /&gt;
:As per the Apache License v2.0, the following code may be reproduced and redistributed under the license below &amp;lt;ref&amp;gt;[http://aws.amazon.com/apache-2-0/ license]&amp;lt;/ref&amp;gt;.&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
# Copyright 2011-2013 Amazon.com, Inc. or its affiliates. All Rights Reserved.&lt;br /&gt;
#&lt;br /&gt;
# Licensed under the Apache License, Version 2.0 (the &amp;quot;License&amp;quot;). You&lt;br /&gt;
# may not use this file except in compliance with the License. A copy of&lt;br /&gt;
# the License is located at&lt;br /&gt;
#&lt;br /&gt;
#     http://aws.amazon.com/apache2.0/&lt;br /&gt;
#&lt;br /&gt;
# or in the &amp;quot;license&amp;quot; file accompanying this file. This file is&lt;br /&gt;
# distributed on an &amp;quot;AS IS&amp;quot; BASIS, WITHOUT WARRANTIES OR CONDITIONS OF&lt;br /&gt;
# ANY KIND, either express or implied. See the License for the specific&lt;br /&gt;
# language governing permissions and limitations under the License.&lt;br /&gt;
&lt;br /&gt;
require 'aws-sdk'&lt;br /&gt;
&lt;br /&gt;
(bucket_name, file_name) = ARGV&lt;br /&gt;
unless bucket_name &amp;amp;&amp;amp; file_name&lt;br /&gt;
  puts &amp;quot;Usage: upload_file.rb &amp;lt;BUCKET_NAME&amp;gt; &amp;lt;FILE_NAME&amp;gt;&amp;quot;&lt;br /&gt;
  exit 1&lt;br /&gt;
end&lt;br /&gt;
&lt;br /&gt;
# get an instance of the S3 interface using the default configuration&lt;br /&gt;
s3 = AWS::S3.new&lt;br /&gt;
&lt;br /&gt;
# create a bucket&lt;br /&gt;
b = s3.buckets.create(bucket_name)&lt;br /&gt;
&lt;br /&gt;
# upload a file&lt;br /&gt;
basename = File.basename(file_name)&lt;br /&gt;
o = b.objects[basename]&lt;br /&gt;
o.write(:file =&amp;gt; file_name)&lt;br /&gt;
&lt;br /&gt;
puts &amp;quot;Uploaded #{file_name} to:&amp;quot;&lt;br /&gt;
puts o.public_url&lt;br /&gt;
&lt;br /&gt;
# generate a presigned URL&lt;br /&gt;
puts &amp;quot;\nUse this URL to download the file:&amp;quot;&lt;br /&gt;
puts o.url_for(:read)&lt;br /&gt;
&lt;br /&gt;
puts &amp;quot;(press any key to delete the object)&amp;quot;&lt;br /&gt;
$stdin.getc&lt;br /&gt;
&lt;br /&gt;
o.delete&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
See the following link for the documentation for AWS SDK - &amp;lt;ref&amp;gt;[http://docs.aws.amazon.com/AWSRubySDK/latest/_index.html AWS SDK for Ruby]&amp;lt;/ref&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==References==&lt;br /&gt;
&amp;lt;references/&amp;gt;&lt;/div&gt;</summary>
		<author><name>Achen4</name></author>
	</entry>
	<entry>
		<id>https://wiki.expertiza.ncsu.edu/index.php?title=CSC/ECE_517_Spring_2015/ch1a_7_SA&amp;diff=93736</id>
		<title>CSC/ECE 517 Spring 2015/ch1a 7 SA</title>
		<link rel="alternate" type="text/html" href="https://wiki.expertiza.ncsu.edu/index.php?title=CSC/ECE_517_Spring_2015/ch1a_7_SA&amp;diff=93736"/>
		<updated>2015-02-14T02:40:55Z</updated>

		<summary type="html">&lt;p&gt;Achen4: /* Alternatives to S3 */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;[[File:S3.gif|frame|Source: http://www.w7cloud.com/7-reasons-to-use-amazon-s3-cloud-computing-online-storage/|right]] Amazon Simple Storage Service (Amazon S3) is a remote, scalable, secure, and cost efficient storage space service provided by Amazon. Users are able to access their storage on Amazon S3 from the web via REST &amp;lt;ref&amp;gt;[http://en.wikipedia.org/wiki/Representational_state_transfer Wikipedia: REST]&amp;lt;/ref&amp;gt; HTTP &amp;lt;ref&amp;gt;[http://en.wikipedia.org/wiki/Hypertext_Transfer_Protocol HTTP]&amp;lt;/ref&amp;gt;, or SOAP &amp;lt;ref&amp;gt;[http://en.wikipedia.org/wiki/SOAP SOAP]&amp;lt;/ref&amp;gt; making their data accessible from virtually anywhere in the world. Amazon S3 implements redundancy across multiple devices on multiple facilities in order to safeguard against application failure ,data loss and minimization of downtime &amp;lt;ref&amp;gt;[http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/AmazonS3.html Amazon S3]&amp;lt;/ref&amp;gt;. Some of the most prominent users of Amazon S3 include: Netflix, SmugMug, Wetransfer, Pinterest, and NASDAQ &amp;lt;ref&amp;gt;[http://aws.amazon.com/s3/ Amazon S3 Homepage]&amp;lt;/ref&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
[https://docs.google.com/a/ncsu.edu/document/d/1TgBtp7flIPKJwkkShgtcIkt--mtHuwVHsQX6Tpzj1rc/edit Writing Assignment 1a]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
==Background==&lt;br /&gt;
&lt;br /&gt;
Amazon S3 launched in March of 2006 in the United States &amp;lt;ref&amp;gt;[http://phx.corporate-ir.net/phoenix.zhtml?c=176060&amp;amp;p=irol-newsArticle&amp;amp;ID=830816 Amazon Press Release]&amp;lt;/ref&amp;gt; and in Europe in November of 2007 &amp;lt;ref&amp;gt;[http://phx.corporate-ir.net/phoenix.zhtml?c=176060&amp;amp;p=irol-newsArticle&amp;amp;ID=1072982 Amazon Press Release]&amp;lt;/ref&amp;gt;. Since its inception, Amazon S3 has reported tremendous growth. Beginning in July of 2006, S3 hosted 800 million objects; April of 2007, 5 billion objects; October of 2007, 10 billion; Jan 2008, 14 billion &amp;lt;ref&amp;gt;[http://www.allthingsdistributed.com/2008/03/happy_birthday_amazon_s3.html Happy birthday Amazon S3]&amp;lt;/ref&amp;gt;; October 2008, 29 billion &amp;lt;ref&amp;gt;[https://aws.amazon.com/blogs/aws/amazon-s3-now/ amazon s3 now]&amp;lt;/ref&amp;gt;; March 2009, 52 billion &amp;lt;ref&amp;gt;[https://aws.amazon.com/blogs/aws/celebrating-s3s-third-birthday-with-an-upload-promotion/ Upload promotion]&amp;lt;/ref&amp;gt;; August 2009, 64 billion &amp;lt;ref&amp;gt;[http://www.eweek.com/c/a/Cloud-Computing/Amazons-Head-Start-in-the-Cloud-Pays-Off-584083 Amazons Head Start in the Cloud]&amp;lt;/ref&amp;gt;. In April of 2013, S3 now hosts more than 2 trillion objects and on average 1.1 million requests every second! &amp;lt;ref&amp;gt;[https://aws.amazon.com/blogs/aws/amazon-s3-two-trillion-objects-11-million-requests-second/ Two trillion objects]&amp;lt;/ref&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
===Alternatives to S3===&lt;br /&gt;
There currently exist several alternatives to Amazon S3:&lt;br /&gt;
:*[https://cloud.google.com/storage/| Google Cloud Storage]&lt;br /&gt;
:*[http://azure.microsoft.com/en-us/ |MSAzure]&lt;br /&gt;
&lt;br /&gt;
As with any online cloud storage service, the cost of storage is always a concern. The following table shows a cost comparison between them:&lt;br /&gt;
&lt;br /&gt;
[[File:PriceCompare.png]]&lt;br /&gt;
&lt;br /&gt;
While all three services are fairly comparable in price, there are some developer-support considerations when selecting a service. For instance, if an app is focused on the [https://www.android.com/| Android OS], using Google Cloud Storage provides more tightly coupled APIs for your app, and the development overhead may be much lower than wiring S3 into a Rails back end for a Google-centric app. Similarly, Azure offers more native support for [http://www.windowsphone.com/en-us/features| Windows Phone OS] apps, and likewise Kindle Fire apps have more native support through Amazon's [https://developer.amazon.com/public/apis| Fire OS] APIs.&lt;br /&gt;
&lt;br /&gt;
The reliability of a cloud storage service like S3 plays a heavy role in the selection process. Less downtime translates to a more robust and reliable app. In 2014, Amazon S3 registered only 23 outages totaling 2.69 hours of downtime whereas Google registered 8 outages totaling 14.23 hours of downtime. Microsoft Azure totaled 141 outages summing 10.97 hours. &lt;br /&gt;
&lt;br /&gt;
[[File:ServiceCompare.png]]&lt;br /&gt;
&lt;br /&gt;
===Design===&lt;br /&gt;
&lt;br /&gt;
S3 is an example of object storage and is unlike a traditional hierarchical file system. S3 exposes a simple feature set to improve robustness, and all data in S3 is accessed in terms of objects and buckets. &lt;br /&gt;
&lt;br /&gt;
====Objects====&lt;br /&gt;
&lt;br /&gt;
Objects are the basic units of storage in Amazon S3. Each object is composed of object data and metadata, and S3 supports a size of up to 5 terabytes per object. Metadata is a set of name-value pairs that describe the object, such as the date it was last modified, and the user can also store custom data about the object in its metadata. Every object is identified by a user-defined key and can be versioned. &amp;lt;ref&amp;gt;[http://docs.aws.amazon.com/AmazonS3/latest/dev/Introduction.html S3 Introduction]&amp;lt;/ref&amp;gt; An object consists of the following: key, version ID, value, metadata, subresources, and access control information. &amp;lt;ref&amp;gt;[http://docs.aws.amazon.com/AmazonS3/latest/dev/UsingObjects.html Using Objects]&amp;lt;/ref&amp;gt;&lt;br /&gt;
&lt;br /&gt;
====Buckets====&lt;br /&gt;
&lt;br /&gt;
A bucket is a container for objects and every object must be part of a bucket. Any number of objects can be part of a Bucket. Buckets can be configured to be hosted in a particular region (US, EU, Asia Pacific etc.) in order to optimize latency. S3 limits the number of buckets per account to 100. &amp;lt;ref&amp;gt;[http://docs.aws.amazon.com/AmazonS3/latest/dev/UsingBucket.html Using Bucket]&amp;lt;/ref&amp;gt;&lt;br /&gt;
&lt;br /&gt;
====Keys and Metadata====&lt;br /&gt;
&lt;br /&gt;
A user specifies a key on object creation, which uniquely identifies the object within its bucket. Object keys can be at most 1024 bytes long.&lt;br /&gt;
&lt;br /&gt;
There are two kinds of metadata for an object: system metadata and user-defined object metadata. System metadata, such as Date and Content-Type, is used by S3 for object management. Object metadata is optional and lets the user attach additional name-value pairs to an object when it is created. &amp;lt;ref&amp;gt;[http://docs.aws.amazon.com/AmazonS3/latest/dev/UsingMetadata.html Using Metadata]&amp;lt;/ref&amp;gt;&lt;br /&gt;
&lt;br /&gt;
====Regions====&lt;br /&gt;
&lt;br /&gt;
Regions allow a user to specify the geographical region where a bucket will be stored. This can be used to optimize latency and minimize costs.&lt;br /&gt;
S3 supports the following regions - US Standard, US West (Oregon) region, US West (N. California) region, EU (Ireland) region, EU (Frankfurt) region, Asia Pacific (Singapore) region, Asia Pacific (Tokyo) region, Asia Pacific (Sydney) region, South America (Sao Paulo) region &amp;lt;ref&amp;gt;[http://docs.aws.amazon.com/AmazonS3/latest/dev/Introduction.html#Regions Regions]&amp;lt;/ref&amp;gt;&lt;br /&gt;
&lt;br /&gt;
====Versioning====&lt;br /&gt;
&lt;br /&gt;
Versioning can be used to retrieve and restore every version of an object in a bucket. When versioning is enabled, every change to an object (create, modify, delete) results in a separate version of the object, which can later be used for restoring or recovery. Versioning is configured at the bucket level, not for individual objects. It can be enabled per bucket, but a versioning-enabled bucket cannot be returned to the unversioned state; versioning can only be suspended. &amp;lt;ref&amp;gt;[http://docs.aws.amazon.com/AmazonS3/latest/dev/Versioning.html Versioning]&amp;lt;/ref&amp;gt;&lt;br /&gt;
&lt;br /&gt;
====Access Permissions====&lt;br /&gt;
&lt;br /&gt;
All resources (buckets, objects, etc.) in Amazon S3 are private by default. Only the resource owner can access a resource, and the owner can grant access to other users. There are two types of access policies in S3: resource-based policies and user policies. Resource-based policies are attached to a particular resource, while user policies are assigned to a particular user. &amp;lt;ref&amp;gt;[http://docs.aws.amazon.com/AmazonS3/latest/dev/s3-access-control.html Access Control]&amp;lt;/ref&amp;gt;&lt;br /&gt;
&lt;br /&gt;
====Data Protection====&lt;br /&gt;
&lt;br /&gt;
Objects are redundantly stored on multiple devices across multiple facilities within a region for durability. To achieve this durability, write requests do not return success until the data has been stored across multiple facilities. Checksums are also used to verify data integrity; if any corruption is detected, it is repaired using the redundant copies. &amp;lt;ref&amp;gt;[http://docs.aws.amazon.com/AmazonS3/latest/dev/DataDurability.html Data Durability]&amp;lt;/ref&amp;gt;&lt;br /&gt;
&lt;br /&gt;
===Ruby and Amazon S3===&lt;br /&gt;
&lt;br /&gt;
Amazon Web Services (AWS) provides an SDK &amp;lt;ref&amp;gt;[http://rubygems.org/gems/aws-sdk download]&amp;lt;/ref&amp;gt; for Ruby that covers many Amazon web services, including Amazon S3. Developers new to the AWS SDK should begin with Version 2, as it includes many built-in features such as waiters, automatically paginated responses, and a streamlined plugin-style architecture. Version 2 of the SDK is split into two packages, also referred to as gems &amp;lt;ref&amp;gt;[http://en.wikipedia.org/wiki/RubyGems Wikipedia: RubyGems]&amp;lt;/ref&amp;gt;:&lt;br /&gt;
&lt;br /&gt;
:* '''aws-sdk-core''' - provides a direct mapping to the AWS APIs including automatic response paging, waiters, parameter validation, and Ruby type support&lt;br /&gt;
:* '''aws-sdk-resources'''  - provides an object-oriented abstraction over the low-level interfaces in the core gem, reducing the complexity of using them directly; resource objects reference other resources (such as an S3 bucket) and expose their attributes and actions as instance variables and methods (see the sketch below).&lt;br /&gt;
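&lt;br /&gt;
The following sketch contrasts the two Version 2 gems. It assumes that credentials and a default region are configured in the environment and that a bucket named my-new-bucket already exists:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
require 'aws-sdk'&lt;br /&gt;
&lt;br /&gt;
# aws-sdk-core: a low-level client whose methods map directly to S3 API operations&lt;br /&gt;
client = Aws::S3::Client.new(region: 'us-east-1')&lt;br /&gt;
client.list_buckets.buckets.each { |b| puts b.name }&lt;br /&gt;
&lt;br /&gt;
# aws-sdk-resources: an object-oriented layer built on top of the client&lt;br /&gt;
s3 = Aws::S3::Resource.new(client: client)&lt;br /&gt;
s3.bucket('my-new-bucket').objects.each { |obj| puts obj.key }&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;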
&lt;br /&gt;
&lt;br /&gt;
It should also be noted that Version 1 of the AWS SDK still exists but lacks some of the &amp;quot;convenience features&amp;quot; available in Version 2. For more information, see the AWS Ruby Development Blog. &amp;lt;ref&amp;gt;[http://ruby.awsblog.com/post/Tx2OMCYFEZX2I6A/AWS-SDK-for-Ruby-V2-Preview-Release AWS Ruby Development Blog]&amp;lt;/ref&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==Examples==&lt;br /&gt;
The following section contains examples of Ruby code interfacing with S3. Sources and documentation for the code are provided; please observe the copyrights if you choose to use any of the posted code. Note that the short examples below follow the ceph RADOS Gateway documentation and use an older AWS::S3 interface style, while the final upload example uses the aws-sdk Version 1 gem.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
'''Note:''' There are three key classes in the AWS SDK &amp;lt;ref&amp;gt;[http://docs.aws.amazon.com/AmazonS3/latest/dev/UsingTheMPRubyAPI.html Using the Ruby API]&amp;lt;/ref&amp;gt;:&lt;br /&gt;
&lt;br /&gt;
:* '''AWS::S3''' - represents the top-level interface to Amazon S3 in the Ruby SDK. Its ''#buckets'' instance method is used to create new buckets or access existing ones.&lt;br /&gt;
:* '''AWS::S3::Bucket''' - represents an Amazon S3 bucket. It provides the ''#objects'' instance method for accessing existing objects, as well as other methods for getting information about a bucket.&lt;br /&gt;
:* '''AWS::S3::S3Object''' - represents an Amazon S3 object. It provides methods for getting information about an object, setting access permissions, and copying, deleting, and uploading objects (a short navigation sketch follows this list).&lt;br /&gt;
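&lt;br /&gt;
As a quick illustration of how these classes fit together (using the aws-sdk Version 1 gem; the bucket and key names are hypothetical):&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
require 'aws-sdk'&lt;br /&gt;
&lt;br /&gt;
s3 = AWS::S3.new                       # AWS::S3 - the top-level interface&lt;br /&gt;
bucket = s3.buckets['my-new-bucket']   # AWS::S3::Bucket&lt;br /&gt;
object = bucket.objects['hello.txt']   # AWS::S3::S3Object&lt;br /&gt;
&lt;br /&gt;
puts bucket.name&lt;br /&gt;
puts object.key&lt;br /&gt;
puts object.exists?&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;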
&lt;br /&gt;
===Creating a connection to S3 server===&lt;br /&gt;
&lt;br /&gt;
Connecting to the S3 server is the essential first step to accessing your data. The following example shows how to connect to the server via SSL. &amp;lt;ref&amp;gt;[http://en.wikipedia.org/wiki/SSL Wikipedia: SSL]&amp;lt;/ref&amp;gt;&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
AWS::S3::Base.establish_connection!(&lt;br /&gt;
        :server            =&amp;gt; 'objects.example.com',&lt;br /&gt;
        :use_ssl           =&amp;gt; true,&lt;br /&gt;
        :access_key_id     =&amp;gt; 'my-access-key',&lt;br /&gt;
        :secret_access_key =&amp;gt; 'my-secret-key'&lt;br /&gt;
)&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
:Source: http://ceph.com/docs/master/radosgw/s3/ruby/&lt;br /&gt;
&lt;br /&gt;
===Listing all buckets you own===&lt;br /&gt;
Buckets contain the objects you upload; to access any object, you must go through its bucket. This example shows how to query the S3 server for a list of all the buckets you own.&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
AWS::S3::Service.buckets.each do |bucket|&lt;br /&gt;
        puts &amp;quot;#{bucket.name}\t#{bucket.creation_date}&amp;quot;&lt;br /&gt;
end&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
'''Expected output:'''&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
mybuckat1   2011-04-21T18:05:39.000Z&lt;br /&gt;
mybuckat2   2011-04-21T18:05:48.000Z&lt;br /&gt;
mybuckat3   2011-04-21T18:07:18.000Z&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
:Source: http://ceph.com/docs/master/radosgw/s3/ruby/&lt;br /&gt;
&lt;br /&gt;
===Listing a bucket's contents===&lt;br /&gt;
For a known bucket (see the example above for listing all the buckets you own), you can also query the server for all the objects it contains. This code shows you how.&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
new_bucket = AWS::S3::Bucket.find('my-new-bucket')&lt;br /&gt;
new_bucket.each do |object|&lt;br /&gt;
        puts &amp;quot;#{object.key}\t#{object.about['content-length']}\t#{object.about['last-modified']}&amp;quot;&lt;br /&gt;
end&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
'''Expected output'''&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
file1.filex 251262  2011-08-08T21:35:48.000Z&lt;br /&gt;
file2.filex 262518  2011-08-08T21:38:01.000Z&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
:Source: http://ceph.com/docs/master/radosgw/s3/ruby/&lt;br /&gt;
&lt;br /&gt;
===Deleting an object===&lt;br /&gt;
This example shows how to delete the object &amp;quot;goodbye.txt&amp;quot;. You must specify the bucket that contains the object as the second parameter of the S3Object.delete method.&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
AWS::S3::S3Object.delete('goodbye.txt', 'my-new-bucket')&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
:Source: http://ceph.com/docs/master/radosgw/s3/ruby/&lt;br /&gt;
&lt;br /&gt;
===Deleting a bucket===&lt;br /&gt;
Bucket removal may be necessary when you are trying to reduce the cost of keeping your data on the S3 servers. This code deletes a bucket, but only if the bucket is empty.&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
AWS::S3::Bucket.delete('my-new-bucket')&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
:Source: http://ceph.com/docs/master/radosgw/s3/ruby/&lt;br /&gt;
&lt;br /&gt;
===Forced removal of non-empty buckets===&lt;br /&gt;
If you want to remove a bucket and discard all of its contents, you can force the deletion as shown below.&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
AWS::S3::Bucket.delete('my-new-bucket', :force =&amp;gt; true)&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
:Source: http://ceph.com/docs/master/radosgw/s3/ruby/&lt;br /&gt;
&lt;br /&gt;
===Creating an object===&lt;br /&gt;
If your app stores data as objects, you can use the S3 server to host them. In this example we create a text file hello.txt with the content &amp;quot;Hello World!&amp;quot; and save it to my-new-bucket. Note that the :content_type option is required by the S3Object.store method.&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
AWS::S3::S3Object.store(&lt;br /&gt;
        'hello.txt',&lt;br /&gt;
        'Hello World!',&lt;br /&gt;
        'my-new-bucket',&lt;br /&gt;
        :content_type =&amp;gt; 'text/plain'&lt;br /&gt;
)&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
:Source: http://ceph.com/docs/master/radosgw/s3/ruby/&lt;br /&gt;
&lt;br /&gt;
===Change an object's ACL (access control list)===&lt;br /&gt;
Securing user data from public view is important if you are keeping sensitive information about users. This example shows how to open the hello.txt object created earlier in my-new-bucket to the public (anyone can read it), and how to restrict secret_plans.txt so that only the owner can access it.&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
policy = AWS::S3::S3Object.acl('hello.txt', 'my-new-bucket')&lt;br /&gt;
policy.grants = [ AWS::S3::ACL::Grant.grant(:public_read) ]&lt;br /&gt;
AWS::S3::S3Object.acl('hello.txt', 'my-new-bucket', policy)&lt;br /&gt;
&lt;br /&gt;
policy = AWS::S3::S3Object.acl('secret_plans.txt', 'my-new-bucket')&lt;br /&gt;
policy.grants = []&lt;br /&gt;
AWS::S3::S3Object.acl('secret_plans.txt', 'my-new-bucket', policy)&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
:Source: http://ceph.com/docs/master/radosgw/s3/ruby/&lt;br /&gt;
&lt;br /&gt;
===Generating object download URLs===&lt;br /&gt;
Generating download links for objects on the S3 server makes it easier to share your objects with other users. This example shows how to create download links for the hello.txt and secret_plans.txt objects created earlier.&lt;br /&gt;
Note that generating a link requires you to specify the bucket the object resides in. Links can be public, as in the case of hello.txt below, or can be set to expire after a given time through the :expires_in option. In the example, the link for secret_plans.txt expires after 1 hour (60 * 60 seconds).&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
puts AWS::S3::S3Object.url_for(&lt;br /&gt;
        'hello.txt',&lt;br /&gt;
        'my-new-bucket',&lt;br /&gt;
        :authenticated =&amp;gt; false&lt;br /&gt;
)&lt;br /&gt;
&lt;br /&gt;
puts AWS::S3::S3Object.url_for(&lt;br /&gt;
        'secret_plans.txt',&lt;br /&gt;
        'my-new-bucket',&lt;br /&gt;
        :expires_in =&amp;gt; 60 * 60&lt;br /&gt;
)&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
'''Expected Output:'''&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
http://objects.dreamhost.com/my-bucket-name/hello.txt&lt;br /&gt;
http://objects.dreamhost.com/my-bucket-name/secret_plans.txt?Signature=XXXXXXXXXXXXXXXXXXXXXXXXXXX&amp;amp;Expires=1316027075&amp;amp;AWSAccessKeyId=XXXXXXXXXXXXXXXXXXX&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
:Source: http://ceph.com/docs/master/radosgw/s3/ruby/&lt;br /&gt;
&lt;br /&gt;
===Download an object from S3 to a local file===&lt;br /&gt;
An app may retrieve objects that were previously stored on the S3 server and save them locally for later use. This example shows how to download the poetry.pdf object from the bucket my-new-bucket to a local file.&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
open('/home/larry/documents/poetry.pdf', 'w') do |file|&lt;br /&gt;
        AWS::S3::S3Object.stream('poetry.pdf', 'my-new-bucket') do |chunk|&lt;br /&gt;
                file.write(chunk)&lt;br /&gt;
        end&lt;br /&gt;
end&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
:Source: http://ceph.com/docs/master/radosgw/s3/ruby/&lt;br /&gt;
&lt;br /&gt;
===Upload a file to Amazon S3===&lt;br /&gt;
This example combines many of the examples shown above. A bucket is created and a local file is uploaded to it. The program also prints a public URL and a presigned download URL for the uploaded object, and then deletes the object from S3 at the user's discretion. &lt;br /&gt;
:As per the Apache License v2.0, the following code is reproduced here and is redistributable under the license below &amp;lt;ref&amp;gt;[http://aws.amazon.com/apache-2-0/ license]&amp;lt;/ref&amp;gt;.&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
# Copyright 2011-2013 Amazon.com, Inc. or its affiliates. All Rights Reserved.&lt;br /&gt;
#&lt;br /&gt;
# Licensed under the Apache License, Version 2.0 (the &amp;quot;License&amp;quot;). You&lt;br /&gt;
# may not use this file except in compliance with the License. A copy of&lt;br /&gt;
# the License is located at&lt;br /&gt;
#&lt;br /&gt;
#     http://aws.amazon.com/apache2.0/&lt;br /&gt;
#&lt;br /&gt;
# or in the &amp;quot;license&amp;quot; file accompanying this file. This file is&lt;br /&gt;
# distributed on an &amp;quot;AS IS&amp;quot; BASIS, WITHOUT WARRANTIES OR CONDITIONS OF&lt;br /&gt;
# ANY KIND, either express or implied. See the License for the specific&lt;br /&gt;
# language governing permissions and limitations under the License.&lt;br /&gt;
&lt;br /&gt;
require 'aws-sdk'&lt;br /&gt;
&lt;br /&gt;
(bucket_name, file_name) = ARGV&lt;br /&gt;
unless bucket_name &amp;amp;&amp;amp; file_name&lt;br /&gt;
  puts &amp;quot;Usage: upload_file.rb &amp;lt;BUCKET_NAME&amp;gt; &amp;lt;FILE_NAME&amp;gt;&amp;quot;&lt;br /&gt;
  exit 1&lt;br /&gt;
end&lt;br /&gt;
&lt;br /&gt;
# get an instance of the S3 interface using the default configuration&lt;br /&gt;
s3 = AWS::S3.new&lt;br /&gt;
&lt;br /&gt;
# create a bucket&lt;br /&gt;
b = s3.buckets.create(bucket_name)&lt;br /&gt;
&lt;br /&gt;
# upload a file&lt;br /&gt;
basename = File.basename(file_name)&lt;br /&gt;
o = b.objects[basename]&lt;br /&gt;
o.write(:file =&amp;gt; file_name)&lt;br /&gt;
&lt;br /&gt;
puts &amp;quot;Uploaded #{file_name} to:&amp;quot;&lt;br /&gt;
puts o.public_url&lt;br /&gt;
&lt;br /&gt;
# generate a presigned URL&lt;br /&gt;
puts &amp;quot;\nUse this URL to download the file:&amp;quot;&lt;br /&gt;
puts o.url_for(:read)&lt;br /&gt;
&lt;br /&gt;
puts &amp;quot;(press any key to delete the object)&amp;quot;&lt;br /&gt;
$stdin.getc&lt;br /&gt;
&lt;br /&gt;
o.delete&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
See the AWS SDK for Ruby documentation for further details. &amp;lt;ref&amp;gt;[http://docs.aws.amazon.com/AWSRubySDK/latest/_index.html AWS SDK for Ruby]&amp;lt;/ref&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==References==&lt;br /&gt;
&amp;lt;references/&amp;gt;&lt;/div&gt;</summary>
		<author><name>Achen4</name></author>
	</entry>
	<entry>
		<id>https://wiki.expertiza.ncsu.edu/index.php?title=File:ServiceCompare.png&amp;diff=93735</id>
		<title>File:ServiceCompare.png</title>
		<link rel="alternate" type="text/html" href="https://wiki.expertiza.ncsu.edu/index.php?title=File:ServiceCompare.png&amp;diff=93735"/>
		<updated>2015-02-14T02:40:16Z</updated>

		<summary type="html">&lt;p&gt;Achen4: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&lt;/div&gt;</summary>
		<author><name>Achen4</name></author>
	</entry>
	<entry>
		<id>https://wiki.expertiza.ncsu.edu/index.php?title=File:PriceCompare.png&amp;diff=93730</id>
		<title>File:PriceCompare.png</title>
		<link rel="alternate" type="text/html" href="https://wiki.expertiza.ncsu.edu/index.php?title=File:PriceCompare.png&amp;diff=93730"/>
		<updated>2015-02-14T02:17:46Z</updated>

		<summary type="html">&lt;p&gt;Achen4: uploaded a new version of &amp;amp;quot;File:PriceCompare.png&amp;amp;quot;&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&lt;/div&gt;</summary>
		<author><name>Achen4</name></author>
	</entry>
	<entry>
		<id>https://wiki.expertiza.ncsu.edu/index.php?title=File:PriceCompare.png&amp;diff=93727</id>
		<title>File:PriceCompare.png</title>
		<link rel="alternate" type="text/html" href="https://wiki.expertiza.ncsu.edu/index.php?title=File:PriceCompare.png&amp;diff=93727"/>
		<updated>2015-02-14T02:15:27Z</updated>

		<summary type="html">&lt;p&gt;Achen4: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&lt;/div&gt;</summary>
		<author><name>Achen4</name></author>
	</entry>
	<entry>
		<id>https://wiki.expertiza.ncsu.edu/index.php?title=CSC/ECE_517_Spring_2015/ch1a_7_SA&amp;diff=93722</id>
		<title>CSC/ECE 517 Spring 2015/ch1a 7 SA</title>
		<link rel="alternate" type="text/html" href="https://wiki.expertiza.ncsu.edu/index.php?title=CSC/ECE_517_Spring_2015/ch1a_7_SA&amp;diff=93722"/>
		<updated>2015-02-14T02:05:09Z</updated>

		<summary type="html">&lt;p&gt;Achen4: /* Alternatives to S3 */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;[[File:S3.gif|frame|Source: http://www.w7cloud.com/7-reasons-to-use-amazon-s3-cloud-computing-online-storage/|right]] Amazon Simple Storage Service (Amazon S3) is a remote, scalable, secure, and cost efficient storage space service provided by Amazon. Users are able to access their storage on Amazon S3 from the web via REST &amp;lt;ref&amp;gt;[http://en.wikipedia.org/wiki/Representational_state_transfer Wikipedia: REST]&amp;lt;/ref&amp;gt; HTTP &amp;lt;ref&amp;gt;[http://en.wikipedia.org/wiki/Hypertext_Transfer_Protocol HTTP]&amp;lt;/ref&amp;gt;, or SOAP &amp;lt;ref&amp;gt;[http://en.wikipedia.org/wiki/SOAP SOAP]&amp;lt;/ref&amp;gt; making their data accessible from virtually anywhere in the world. Amazon S3 implements redundancy across multiple devices on multiple facilities in order to safeguard against application failure ,data loss and minimization of downtime &amp;lt;ref&amp;gt;[http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/AmazonS3.html Amazon S3]&amp;lt;/ref&amp;gt;. Some of the most prominent users of Amazon S3 include: Netflix, SmugMug, Wetransfer, Pinterest, and NASDAQ &amp;lt;ref&amp;gt;[http://aws.amazon.com/s3/ Amazon S3 Homepage]&amp;lt;/ref&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
[https://docs.google.com/a/ncsu.edu/document/d/1TgBtp7flIPKJwkkShgtcIkt--mtHuwVHsQX6Tpzj1rc/edit Writing Assignment 1a]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
==Background==&lt;br /&gt;
&lt;br /&gt;
Amazon S3 launched in March of 2006 in the United States &amp;lt;ref&amp;gt;[http://phx.corporate-ir.net/phoenix.zhtml?c=176060&amp;amp;p=irol-newsArticle&amp;amp;ID=830816 Amazon Press Release]&amp;lt;/ref&amp;gt; and in Europe in November of 2007 &amp;lt;ref&amp;gt;[http://phx.corporate-ir.net/phoenix.zhtml?c=176060&amp;amp;p=irol-newsArticle&amp;amp;ID=1072982 Amazon Press Release]&amp;lt;/ref&amp;gt;. Since its inception, Amazon S3 has reported tremendous growth. Beginning in July of 2006, S3 hosted 800 million objects; April of 2007, 5 billion objects; October of 2007, 10 billion; Jan 2008, 14 billion &amp;lt;ref&amp;gt;[http://www.allthingsdistributed.com/2008/03/happy_birthday_amazon_s3.html Happy birthday Amazon S3]&amp;lt;/ref&amp;gt;; October 2008, 29 billion &amp;lt;ref&amp;gt;[https://aws.amazon.com/blogs/aws/amazon-s3-now/ amazon s3 now]&amp;lt;/ref&amp;gt;; March 2009, 52 billion &amp;lt;ref&amp;gt;[https://aws.amazon.com/blogs/aws/celebrating-s3s-third-birthday-with-an-upload-promotion/ Upload promotion]&amp;lt;/ref&amp;gt;; August 2009, 64 billion &amp;lt;ref&amp;gt;[http://www.eweek.com/c/a/Cloud-Computing/Amazons-Head-Start-in-the-Cloud-Pays-Off-584083 Amazons Head Start in the Cloud]&amp;lt;/ref&amp;gt;. In April of 2013, S3 now hosts more than 2 trillion objects and on average 1.1 million requests every second! &amp;lt;ref&amp;gt;[https://aws.amazon.com/blogs/aws/amazon-s3-two-trillion-objects-11-million-requests-second/ Two trillion objects]&amp;lt;/ref&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
===Alternatives to S3===&lt;br /&gt;
&lt;br /&gt;
===Design===&lt;br /&gt;
&lt;br /&gt;
S3 is an example of an object storage and is not like a traditional hierarchical file system. S3 exposes a simple feature set to improve robustness and all data in S3 is accessed in the terms of objects and buckets. &lt;br /&gt;
&lt;br /&gt;
====Objects====&lt;br /&gt;
&lt;br /&gt;
Objects are the basic units of storage in Amazon S3. Each object is composed of object data and metadata. S3 supports a size of up to 5 Terabytes per object. Each object has an associated metadata that is used to identify the object. Metadata is a set of name-value pairs that describe the object like date modified. Custom data about the object can be stored in metadata by the user. Every object is identified by a user defined key and is versioned by default. &amp;lt;ref&amp;gt;[http://docs.aws.amazon.com/AmazonS3/latest/dev/Introduction.html S3 Introduction]&amp;lt;/ref&amp;gt;. An object consists of the following - Key, Version ID, Value, Metadata, Subresources and Access Control Information. &amp;lt;ref&amp;gt;[http://docs.aws.amazon.com/AmazonS3/latest/dev/UsingObjects.html Using Objects]&amp;lt;/ref&amp;gt;&lt;br /&gt;
&lt;br /&gt;
====Buckets====&lt;br /&gt;
&lt;br /&gt;
A bucket is a container for objects and every object must be part of a bucket. Any number of objects can be part of a Bucket. Buckets can be configured to be hosted in a particular region (US, EU, Asia Pacific etc.) in order to optimize latency. S3 limits the number of buckets per account to 100. &amp;lt;ref&amp;gt;[http://docs.aws.amazon.com/AmazonS3/latest/dev/UsingBucket.html Using Bucket]&amp;lt;/ref&amp;gt;&lt;br /&gt;
&lt;br /&gt;
====Keys and Metadata====&lt;br /&gt;
&lt;br /&gt;
An user specifies a key on object creation which is used to uniquely identify the object in the bucket. Keys for the objects can be at most 1024 bytes long.&lt;br /&gt;
&lt;br /&gt;
There are two kinds of metadata for an object - System metadata and Object metadata. System metadata is used by S3 for object management. For eg. - Data, Content-Type etc. are stored as System metadata. Object metadata is optional and can be used by the user to add additional metadata to the objects during object creation. &amp;lt;ref&amp;gt;[http://docs.aws.amazon.com/AmazonS3/latest/dev/UsingMetadata.html Using Metadata]&amp;lt;/ref&amp;gt;&lt;br /&gt;
&lt;br /&gt;
====Regions====&lt;br /&gt;
&lt;br /&gt;
Regions allow a user to specify the geographical region where the buckets will be stored. This can be used to optimize latency and minimizing costs.&lt;br /&gt;
S3 supports the following regions - US Standard, US West (Oregon) region, US West (N. California) region, EU (Ireland) region, EU (Frankfurt) region, Asia Pacific (Singapore) region, Asia Pacific (Tokyo) region, Asia Pacific (Sydney) region, South America (Sao Paulo) region &amp;lt;ref&amp;gt;[http://docs.aws.amazon.com/AmazonS3/latest/dev/Introduction.html#Regions Regions]&amp;lt;/ref&amp;gt;&lt;br /&gt;
&lt;br /&gt;
====Versioning====&lt;br /&gt;
&lt;br /&gt;
All objects in S3 are versioned by default and it can be used to retrieve and restore every version of an object in a bucket. Every change to an object(create, modify, delete) results in a separate version of the object which can be later used for restoring or recovery. Versioning is done at the bucket level and not for individual objects. It can be turned off or on per bucket but a versioned-enabled bucket cannot be turned to an unversioned bucket. Versioning can only be paused in these cases. &amp;lt;ref&amp;gt;[http://docs.aws.amazon.com/AmazonS3/latest/dev/Versioning.html Versioning]&amp;lt;/ref&amp;gt;&lt;br /&gt;
&lt;br /&gt;
====Access Permissions====&lt;br /&gt;
&lt;br /&gt;
All resources(buckets,objects etc) are private in Amazon S3 by default. Only the resource owner can access the resource and can grant access to other users to accesss the resource. There are two types of access policies in S3 - Resource-based and user policies. Resource-based policies are attached to a particular resource and user policies are assigned to a particular user.&amp;lt;ref&amp;gt;[http://docs.aws.amazon.com/AmazonS3/latest/dev/s3-access-control.html Access Control]&amp;lt;/ref&amp;gt;&lt;br /&gt;
&lt;br /&gt;
====Data Protection====&lt;br /&gt;
&lt;br /&gt;
Objects are redundantly stored on multiple devices across multiple facilities within a region for durability. To improve durability, write requests do not return success before storing the data across multiple facilities. Also checksums are used to verify data integrity. If any corruption is detected, it is repaired using redundant data.&amp;lt;ref&amp;gt;[http://docs.aws.amazon.com/AmazonS3/latest/dev/DataDurability.html Data Durability]&amp;lt;/ref&amp;gt;&lt;br /&gt;
&lt;br /&gt;
===Ruby and Amazon S3===&lt;br /&gt;
&lt;br /&gt;
Amazon Web Services (AWS) provides an SDK &amp;lt;ref&amp;gt;[http://rubygems.org/gems/aws-sdk| download]&amp;lt;/ref&amp;gt; that works with Ruby for many amazon webservices, including Amazon S3. Developers new to the Amazon AWS SDK should begin with version 2 as it includes many built in features such as waiters, automatically paginated responses, and a streamlined plugin style architecture. Version 2 of the SDK has 2 &amp;quot;packages&amp;quot;, also referred to as &amp;quot;gems&amp;quot; &amp;lt;ref&amp;gt;[http://en.wikipedia.org/wiki/RubyGems Wikipedia: RubyGems]&amp;lt;/ref&amp;gt;:&lt;br /&gt;
&lt;br /&gt;
:* '''aws-sdk-core''' - provides a direct mapping to the AWS APIs including automatic response paging, waiters, parameter validation, and Ruby type support&lt;br /&gt;
:* '''aws-sdk-resources'''  - provides an object-oriented abstraction over low-level interfaces in the core to reduce the complexity of utilizing core interfaces; resource objects reference other objects such as an Amazon S3 instance and the attributes and actions as instance variables and methods.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
It should also be noted that there exists a Version 1 of the aws sdk that lacks some &amp;quot;convenience features&amp;quot; otherwise available in version 2 of the sdk. For more information see the &amp;lt;ref&amp;gt;[http://ruby.awsblog.com/post/Tx2OMCYFEZX2I6A/AWS-SDK-for-Ruby-V2-Preview-Release|AWS Ruby Development Blog]&amp;lt;/ref&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==Examples==&lt;br /&gt;
The following section contains examples of ruby interfacing with S3. Sources and documentation for the code are provided. Please observe the copyrights if you choose to use any or all of the posted code.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
'''Note:''' There are 3 key classes in AWS SDK &amp;lt;ref&amp;gt;[http://docs.aws.amazon.com/AmazonS3/latest/dev/UsingTheMPRubyAPI.html Using the Ruby API]&amp;lt;/ref&amp;gt; -&lt;br /&gt;
&lt;br /&gt;
:* '''AWS::S3''' - Denotes an interface to Amazon S3 for the Ruby SDK. It has the ''#buckets'' instance method for creating new buckets or accessing existing buckets.&lt;br /&gt;
:* '''AWS::S3::Bucket''' - Denotes an Amazon S3 Bucket. It provides the ''#objects'' instance method to access existing objects and also other methods to get information about a bucket.&lt;br /&gt;
:* '''AWS::S3::S3Object''' - Denotes an Amazon S3 Object. It provides the method that gives information about the object and also setting access permissions, copying, deleting and uploading objects.&lt;br /&gt;
&lt;br /&gt;
===Creating a connection to S3 server===&lt;br /&gt;
&lt;br /&gt;
Connecting to the S3 server is the essential starting point of accessing your data. The follow example shows how to connect to the server via. SSL &amp;lt;ref&amp;gt;Wikipedia: SSL http://en.wikipedia.org/wiki/SSL&amp;lt;/ref&amp;gt;&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
AWS::S3::Base.establish_connection!(&lt;br /&gt;
        :server            =&amp;gt; 'objects.example.com',&lt;br /&gt;
        :use_ssl           =&amp;gt; true,&lt;br /&gt;
        :access_key_id     =&amp;gt; 'my-access-key',&lt;br /&gt;
        :secret_access_key =&amp;gt; 'my-secret-key'&lt;br /&gt;
)&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
:Source: http://ceph.com/docs/master/radosgw/s3/ruby/&lt;br /&gt;
&lt;br /&gt;
===Listing all buckets you own===&lt;br /&gt;
Buckets hold data about the objects you upload. To access any object, you must access the bucket first. This example shows you how to query the S3 server for a list of all the buckets you own.&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
AWS::S3::Service.buckets.each do |bucket|&lt;br /&gt;
        puts &amp;quot;#{bucket.name}\t#{bucket.creation_date}&amp;quot;&lt;br /&gt;
end&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
'''Expected output:'''&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
mybuckat1   2011-04-21T18:05:39.000Z&lt;br /&gt;
mybuckat2   2011-04-21T18:05:48.000Z&lt;br /&gt;
mybuckat3   2011-04-21T18:07:18.000Z&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
:Source: http://ceph.com/docs/master/radosgw/s3/ruby/&lt;br /&gt;
&lt;br /&gt;
===Listing a bucket's contents===&lt;br /&gt;
For a known bucket (see example for listing all buckets you own), you can also query the server for all objects inside. This code shows you how.&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
new_bucket = AWS::S3::Bucket.find('my-new-bucket')&lt;br /&gt;
new_bucket.each do |object|&lt;br /&gt;
        puts &amp;quot;#{object.key}\t#{object.about&amp;lt;ref&amp;gt;['content-length']}\t#{object.about&amp;lt;ref&amp;gt;['last-modified']}&amp;quot;&lt;br /&gt;
end&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
'''Expected output'''&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
file1.filex 251262  2011-08-08T21:35:48.000Z&lt;br /&gt;
file2.filex 262518  2011-08-08T21:38:01.000Z&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
:Source: http://ceph.com/docs/master/radosgw/s3/ruby/&lt;br /&gt;
&lt;br /&gt;
===Deleting an object===&lt;br /&gt;
This example shows you how to delete the object &amp;quot;goodbye.txt&amp;quot;. You must specify the bucket as the second parameter of the S3Object.delete function.&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
AWS::S3::S3Object.delete('goodbye.txt', 'my-new-bucket')&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
:Source: http://ceph.com/docs/master/radosgw/s3/ruby/&lt;br /&gt;
&lt;br /&gt;
===Deleting a bucket===&lt;br /&gt;
Bucket removal may be necessary when you're trying to reduce the cost of maintaining your data on the S3 servers. This code allows you to delete buckets but only if they're empty.&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
AWS::S3::Bucket.delete('my-new-bucket')&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
:Source: http://ceph.com/docs/master/radosgw/s3/ruby/&lt;br /&gt;
&lt;br /&gt;
===Forced removal of non-empty buckets===&lt;br /&gt;
If you want to forcibly remove a bucket and dump all of its contents, you can force a deletion as shown below.&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
AWS::S3::Bucket.delete('my-new-bucket', :force =&amp;gt; true)&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
:Source: http://ceph.com/docs/master/radosgw/s3/ruby/&lt;br /&gt;
&lt;br /&gt;
===Creating an object===&lt;br /&gt;
Object creation is important to any program. If your app stores data via objects, you can use the S3 server to host them. In this example we create a text file hello.txt with content &amp;quot;Hello World!&amp;quot; and save it to my-new-bucket. Note that the :content_type is required for the ruby method S3Object.store.&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
AWS::S3::S3Object.store(&lt;br /&gt;
        'hello.txt',&lt;br /&gt;
        'Hello World!',&lt;br /&gt;
        'my-new-bucket',&lt;br /&gt;
        :content_type =&amp;gt; 'text/plain'&lt;br /&gt;
)&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
:Source: http://ceph.com/docs/master/radosgw/s3/ruby/&lt;br /&gt;
&lt;br /&gt;
===Change an object's ACL (access control list)===&lt;br /&gt;
Securing user data from view by any web user is important if you're keeping sensitive information about users. This example shows you how to open the hello.txt object created earlier in my-new-bucket to the public (anyone can access it). This example also shows how to restrict the secret_plans.txt so that no one can access them.&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
policy = AWS::S3::S3Object.acl('hello.txt', 'my-new-bucket')&lt;br /&gt;
policy.grants = &amp;lt;ref&amp;gt;[ AWS::S3::ACL::Grant.grant(:public_read) ]&lt;br /&gt;
AWS::S3::S3Object.acl('hello.txt', 'my-new-bucket', policy)&lt;br /&gt;
&lt;br /&gt;
policy = AWS::S3::S3Object.acl('secret_plans.txt', 'my-new-bucket')&lt;br /&gt;
policy.grants = &amp;lt;ref&amp;gt;[]&lt;br /&gt;
AWS::S3::S3Object.acl('secret_plans.txt', 'my-new-bucket', policy)&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
:Source: http://ceph.com/docs/master/radosgw/s3/ruby/&lt;br /&gt;
&lt;br /&gt;
===Generating object download urls===&lt;br /&gt;
Generating download links for objects on the S3 server can make it easier for you to share your objects with other users. This example shows how to create a download link for the hello.txt object we created earlier.&lt;br /&gt;
Note that generating a link requires you to specify the bucket it resides in. Links can be public, as in the case of hello.txt below, or be set to expire on a timer through the expires_in symbol. In the example, the expiration of secret_plans.txt is 1 hour (60s * 60).&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
puts AWS::S3::S3Object.url_for(&lt;br /&gt;
        'hello.txt',&lt;br /&gt;
        'my-new-bucket',&lt;br /&gt;
        :authenticated =&amp;gt; false&lt;br /&gt;
)&lt;br /&gt;
&lt;br /&gt;
puts AWS::S3::S3Object.url_for(&lt;br /&gt;
        'secret_plans.txt',&lt;br /&gt;
        'my-new-bucket',&lt;br /&gt;
        :expires_in =&amp;gt; 60 * 60&lt;br /&gt;
)&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
'''Expected Output:'''&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
http://objects.dreamhost.com/my-bucket-name/hello.txt&lt;br /&gt;
http://objects.dreamhost.com/my-bucket-name/secret_plans.txt?Signature=XXXXXXXXXXXXXXXXXXXXXXXXXXX&amp;amp;Expires=1316027075&amp;amp;AWSAccessKeyId=XXXXXXXXXXXXXXXXXXX&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
:Source: http://ceph.com/docs/master/radosgw/s3/ruby/&lt;br /&gt;
&lt;br /&gt;
===Download an object to a folder on S3===&lt;br /&gt;
An app may choose to retrieve objects from a server and save these to the S3 server, where they can be accessed later. This example shows how to download the poetry.pdf object to the bucket my-new-bucket.&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
open('/home/larry/documents/poetry.pdf', 'w') do |file|&lt;br /&gt;
        AWS::S3::S3Object.stream('poetry.pdf', 'my-new-bucket') do |chunk|&lt;br /&gt;
                file.write(chunk)&lt;br /&gt;
        end&lt;br /&gt;
end&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
:Source: http://ceph.com/docs/master/radosgw/s3/ruby/&lt;br /&gt;
&lt;br /&gt;
===Upload a file to Amazon S3===&lt;br /&gt;
This example is a combination of many examples shown above. A bucket is created and filled with a local file. The program also generates a url for the upload and allows the option of deleting the local copy of the object at the user's discretion. &lt;br /&gt;
:As per the Apache License v 2.0, the follow code is reproducible and redistributable  with the following license &amp;lt;ref&amp;gt;[http://aws.amazon.com/apache-2-0/ license]&amp;lt;/ref&amp;gt;.&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
# Copyright 2011-2013 Amazon.com, Inc. or its affiliates. All Rights Reserved.&lt;br /&gt;
#&lt;br /&gt;
# Licensed under the Apache License, Version 2.0 (the &amp;quot;License&amp;quot;). You&lt;br /&gt;
# may not use this file except in compliance with the License. A copy of&lt;br /&gt;
# the License is located at&lt;br /&gt;
#&lt;br /&gt;
#     http://aws.amazon.com/apache2.0/&lt;br /&gt;
#&lt;br /&gt;
# or in the &amp;quot;license&amp;quot; file accompanying this file. This file is&lt;br /&gt;
# distributed on an &amp;quot;AS IS&amp;quot; BASIS, WITHOUT WARRANTIES OR CONDITIONS OF&lt;br /&gt;
# ANY KIND, either express or implied. See the License for the specific&lt;br /&gt;
# language governing permissions and limitations under the License.&lt;br /&gt;
&lt;br /&gt;
require 'aws-sdk'&lt;br /&gt;
&lt;br /&gt;
(bucket_name, file_name) = ARGV&lt;br /&gt;
unless bucket_name &amp;amp;&amp;amp; file_name&lt;br /&gt;
  puts &amp;quot;Usage: upload_file.rb &amp;lt;BUCKET_NAME&amp;gt; &amp;lt;FILE_NAME&amp;gt;&amp;quot;&lt;br /&gt;
  exit 1&lt;br /&gt;
end&lt;br /&gt;
&lt;br /&gt;
# get an instance of the S3 interface using the default configuration&lt;br /&gt;
s3 = AWS::S3.new&lt;br /&gt;
&lt;br /&gt;
# create a bucket&lt;br /&gt;
b = s3.buckets.create(bucket_name)&lt;br /&gt;
&lt;br /&gt;
# upload a file&lt;br /&gt;
basename = File.basename(file_name)&lt;br /&gt;
o = b.objects[basename]&lt;br /&gt;
o.write(:file =&amp;gt; file_name)&lt;br /&gt;
&lt;br /&gt;
puts &amp;quot;Uploaded #{file_name} to:&amp;quot;&lt;br /&gt;
puts o.public_url&lt;br /&gt;
&lt;br /&gt;
# generate a presigned URL&lt;br /&gt;
puts &amp;quot;\nUse this URL to download the file:&amp;quot;&lt;br /&gt;
puts o.url_for(:read)&lt;br /&gt;
&lt;br /&gt;
puts &amp;quot;(press any key to delete the object)&amp;quot;&lt;br /&gt;
$stdin.getc&lt;br /&gt;
&lt;br /&gt;
o.delete&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
See the following reference for the AWS SDK for Ruby documentation: &amp;lt;ref&amp;gt;[http://docs.aws.amazon.com/AWSRubySDK/latest/_index.html AWS SDK for Ruby]&amp;lt;/ref&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==References==&lt;br /&gt;
&amp;lt;references/&amp;gt;&lt;/div&gt;</summary>
		<author><name>Achen4</name></author>
	</entry>
	<entry>
		<id>https://wiki.expertiza.ncsu.edu/index.php?title=CSC/ECE_517_Spring_2015/ch1a_7_SA&amp;diff=93721</id>
		<title>CSC/ECE 517 Spring 2015/ch1a 7 SA</title>
		<link rel="alternate" type="text/html" href="https://wiki.expertiza.ncsu.edu/index.php?title=CSC/ECE_517_Spring_2015/ch1a_7_SA&amp;diff=93721"/>
		<updated>2015-02-14T02:04:55Z</updated>

		<summary type="html">&lt;p&gt;Achen4: /* Background */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;[[File:S3.gif|frame|Source: http://www.w7cloud.com/7-reasons-to-use-amazon-s3-cloud-computing-online-storage/|right]] Amazon Simple Storage Service (Amazon S3) is a remote, scalable, secure, and cost-efficient storage service provided by Amazon. Users are able to access their storage on Amazon S3 from the web via REST &amp;lt;ref&amp;gt;[http://en.wikipedia.org/wiki/Representational_state_transfer Wikipedia: REST]&amp;lt;/ref&amp;gt;, HTTP &amp;lt;ref&amp;gt;[http://en.wikipedia.org/wiki/Hypertext_Transfer_Protocol HTTP]&amp;lt;/ref&amp;gt;, or SOAP &amp;lt;ref&amp;gt;[http://en.wikipedia.org/wiki/SOAP SOAP]&amp;lt;/ref&amp;gt;, making their data accessible from virtually anywhere in the world. Amazon S3 implements redundancy across multiple devices in multiple facilities in order to safeguard against application failure and data loss and to minimize downtime &amp;lt;ref&amp;gt;[http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/AmazonS3.html Amazon S3]&amp;lt;/ref&amp;gt;. Some of the most prominent users of Amazon S3 include Netflix, SmugMug, WeTransfer, Pinterest, and NASDAQ &amp;lt;ref&amp;gt;[http://aws.amazon.com/s3/ Amazon S3 Homepage]&amp;lt;/ref&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
[https://docs.google.com/a/ncsu.edu/document/d/1TgBtp7flIPKJwkkShgtcIkt--mtHuwVHsQX6Tpzj1rc/edit Writing Assignment 1a]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
==Background==&lt;br /&gt;
&lt;br /&gt;
Amazon S3 launched in March of 2006 in the United States &amp;lt;ref&amp;gt;[http://phx.corporate-ir.net/phoenix.zhtml?c=176060&amp;amp;p=irol-newsArticle&amp;amp;ID=830816 Amazon Press Release]&amp;lt;/ref&amp;gt; and in Europe in November of 2007 &amp;lt;ref&amp;gt;[http://phx.corporate-ir.net/phoenix.zhtml?c=176060&amp;amp;p=irol-newsArticle&amp;amp;ID=1072982 Amazon Press Release]&amp;lt;/ref&amp;gt;. Since its inception, Amazon S3 has reported tremendous growth. Beginning in July of 2006, S3 hosted 800 million objects; April of 2007, 5 billion objects; October of 2007, 10 billion; Jan 2008, 14 billion &amp;lt;ref&amp;gt;[http://www.allthingsdistributed.com/2008/03/happy_birthday_amazon_s3.html Happy birthday Amazon S3]&amp;lt;/ref&amp;gt;; October 2008, 29 billion &amp;lt;ref&amp;gt;[https://aws.amazon.com/blogs/aws/amazon-s3-now/ amazon s3 now]&amp;lt;/ref&amp;gt;; March 2009, 52 billion &amp;lt;ref&amp;gt;[https://aws.amazon.com/blogs/aws/celebrating-s3s-third-birthday-with-an-upload-promotion/ Upload promotion]&amp;lt;/ref&amp;gt;; August 2009, 64 billion &amp;lt;ref&amp;gt;[http://www.eweek.com/c/a/Cloud-Computing/Amazons-Head-Start-in-the-Cloud-Pays-Off-584083 Amazons Head Start in the Cloud]&amp;lt;/ref&amp;gt;. In April of 2013, S3 now hosts more than 2 trillion objects and on average 1.1 million requests every second! &amp;lt;ref&amp;gt;[https://aws.amazon.com/blogs/aws/amazon-s3-two-trillion-objects-11-million-requests-second/ Two trillion objects]&amp;lt;/ref&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
==Alternatives to S3==&lt;br /&gt;
&lt;br /&gt;
===Design===&lt;br /&gt;
&lt;br /&gt;
S3 is an object store, not a traditional hierarchical file system. It exposes a simple feature set to improve robustness, and all data in S3 is accessed in terms of objects and buckets. &lt;br /&gt;
&lt;br /&gt;
====Objects====&lt;br /&gt;
&lt;br /&gt;
Objects are the basic units of storage in Amazon S3. Each object is composed of object data and metadata, and can be up to 5 terabytes in size. Metadata is a set of name-value pairs that describe the object, such as the date it was last modified; users can also store custom data about the object in its metadata. Every object is identified by a user-defined key and is versioned by default. &amp;lt;ref&amp;gt;[http://docs.aws.amazon.com/AmazonS3/latest/dev/Introduction.html S3 Introduction]&amp;lt;/ref&amp;gt; An object consists of the following: Key, Version ID, Value, Metadata, Subresources, and Access Control Information. &amp;lt;ref&amp;gt;[http://docs.aws.amazon.com/AmazonS3/latest/dev/UsingObjects.html Using Objects]&amp;lt;/ref&amp;gt;&lt;br /&gt;
&lt;br /&gt;
====Buckets====&lt;br /&gt;
&lt;br /&gt;
A bucket is a container for objects and every object must be part of a bucket. Any number of objects can be part of a Bucket. Buckets can be configured to be hosted in a particular region (US, EU, Asia Pacific etc.) in order to optimize latency. S3 limits the number of buckets per account to 100. &amp;lt;ref&amp;gt;[http://docs.aws.amazon.com/AmazonS3/latest/dev/UsingBucket.html Using Bucket]&amp;lt;/ref&amp;gt;&lt;br /&gt;
&lt;br /&gt;
====Keys and Metadata====&lt;br /&gt;
&lt;br /&gt;
A user specifies a key at object creation; the key uniquely identifies the object within its bucket. Keys can be at most 1024 bytes long.&lt;br /&gt;
&lt;br /&gt;
There are two kinds of metadata for an object - system metadata and object metadata. System metadata is used by S3 for object management; for example, Date and Content-Type are stored as system metadata. Object metadata is optional and can be used to attach additional, user-defined metadata to an object at creation time. &amp;lt;ref&amp;gt;[http://docs.aws.amazon.com/AmazonS3/latest/dev/UsingMetadata.html Using Metadata]&amp;lt;/ref&amp;gt;&lt;br /&gt;
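As a brief illustration, here is a minimal sketch (using the version 1 aws-sdk gem; the bucket name, key, and metadata values are made up for this example) of attaching user-defined metadata when writing an object and reading it back:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
require 'aws-sdk'&lt;br /&gt;
&lt;br /&gt;
s3 = AWS::S3.new&lt;br /&gt;
bucket = s3.buckets['my-new-bucket']   # assumes this bucket already exists&lt;br /&gt;
&lt;br /&gt;
# write an object with user-defined metadata (stored as x-amz-meta-* headers)&lt;br /&gt;
object = bucket.objects['report.txt']&lt;br /&gt;
object.write('quarterly numbers', :metadata =&amp;gt; { 'author' =&amp;gt; 'achen4', 'department' =&amp;gt; 'sales' })&lt;br /&gt;
&lt;br /&gt;
# read the user-defined metadata back&lt;br /&gt;
puts object.metadata['author']&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;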
&lt;br /&gt;
====Regions====&lt;br /&gt;
&lt;br /&gt;
Regions allow a user to specify the geographical area where buckets will be stored. This can be used to optimize latency and minimize costs.&lt;br /&gt;
S3 supports the following regions - US Standard, US West (Oregon) region, US West (N. California) region, EU (Ireland) region, EU (Frankfurt) region, Asia Pacific (Singapore) region, Asia Pacific (Tokyo) region, Asia Pacific (Sydney) region, South America (Sao Paulo) region &amp;lt;ref&amp;gt;[http://docs.aws.amazon.com/AmazonS3/latest/dev/Introduction.html#Regions Regions]&amp;lt;/ref&amp;gt;&lt;br /&gt;
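As an example, the version 1 aws-sdk gem lets a bucket be pinned to a region when it is created. The snippet below is only a sketch; the bucket name and region value are assumptions and not part of the examples elsewhere on this page.&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
require 'aws-sdk'&lt;br /&gt;
&lt;br /&gt;
s3 = AWS::S3.new&lt;br /&gt;
&lt;br /&gt;
# create a bucket constrained to the EU (Ireland) region&lt;br /&gt;
bucket = s3.buckets.create('my-eu-bucket', :location_constraint =&amp;gt; 'eu-west-1')&lt;br /&gt;
puts bucket.location_constraint&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;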
&lt;br /&gt;
====Versioning====&lt;br /&gt;
&lt;br /&gt;
All objects in S3 are versioned by default, and versioning can be used to retrieve and restore every version of an object in a bucket. Every change to an object (create, modify, delete) results in a separate version that can later be used for restoration or recovery. Versioning is configured at the bucket level, not per object. It can be turned on for a bucket at any time, but a version-enabled bucket cannot be returned to an unversioned state; versioning can only be suspended. &amp;lt;ref&amp;gt;[http://docs.aws.amazon.com/AmazonS3/latest/dev/Versioning.html Versioning]&amp;lt;/ref&amp;gt;&lt;br /&gt;
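The sketch below (version 1 aws-sdk gem, hypothetical bucket name) shows how versioning might be turned on and later suspended for a bucket:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
require 'aws-sdk'&lt;br /&gt;
&lt;br /&gt;
s3 = AWS::S3.new&lt;br /&gt;
bucket = s3.buckets['my-new-bucket']&lt;br /&gt;
&lt;br /&gt;
bucket.enable_versioning          # subsequent writes create new versions instead of overwriting&lt;br /&gt;
puts bucket.versioning_enabled?   # prints true once versioning is on&lt;br /&gt;
&lt;br /&gt;
bucket.suspend_versioning         # versioning cannot be removed, only suspended&lt;br /&gt;
puts bucket.versioning_state      # prints :suspended&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;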
&lt;br /&gt;
====Access Permissions====&lt;br /&gt;
&lt;br /&gt;
All resources (buckets, objects, etc.) in Amazon S3 are private by default. Only the resource owner can access a resource, and the owner can grant other users access to it. There are two types of access policies in S3 - resource-based policies and user policies. Resource-based policies are attached to a particular resource, while user policies are assigned to a particular user.&amp;lt;ref&amp;gt;[http://docs.aws.amazon.com/AmazonS3/latest/dev/s3-access-control.html Access Control]&amp;lt;/ref&amp;gt;&lt;br /&gt;
&lt;br /&gt;
====Data Protection====&lt;br /&gt;
&lt;br /&gt;
Objects are redundantly stored on multiple devices across multiple facilities within a region for durability. To improve durability, write requests do not return success before storing the data across multiple facilities. Also checksums are used to verify data integrity. If any corruption is detected, it is repaired using redundant data.&amp;lt;ref&amp;gt;[http://docs.aws.amazon.com/AmazonS3/latest/dev/DataDurability.html Data Durability]&amp;lt;/ref&amp;gt;&lt;br /&gt;
&lt;br /&gt;
===Ruby and Amazon S3===&lt;br /&gt;
&lt;br /&gt;
Amazon Web Services (AWS) provides an SDK &amp;lt;ref&amp;gt;[http://rubygems.org/gems/aws-sdk| download]&amp;lt;/ref&amp;gt; that works with Ruby for many AWS services, including Amazon S3. Developers new to the AWS SDK should begin with version 2, as it includes many built-in features such as waiters, automatically paginated responses, and a streamlined plugin-style architecture. Version 2 of the SDK has two &amp;quot;packages&amp;quot;, also referred to as &amp;quot;gems&amp;quot; &amp;lt;ref&amp;gt;[http://en.wikipedia.org/wiki/RubyGems Wikipedia: RubyGems]&amp;lt;/ref&amp;gt;:&lt;br /&gt;
&lt;br /&gt;
:* '''aws-sdk-core''' - provides a direct mapping to the AWS APIs including automatic response paging, waiters, parameter validation, and Ruby type support&lt;br /&gt;
:* '''aws-sdk-resources'''  - provides an object-oriented abstraction over low-level interfaces in the core to reduce the complexity of utilizing core interfaces; resource objects reference other objects such as an Amazon S3 instance and the attributes and actions as instance variables and methods.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
It should also be noted that version 1 of the aws-sdk gem is still available, but it lacks some of the &amp;quot;convenience features&amp;quot; found in version 2. For more information see the AWS Ruby Development Blog &amp;lt;ref&amp;gt;[http://ruby.awsblog.com/post/Tx2OMCYFEZX2I6A/AWS-SDK-for-Ruby-V2-Preview-Release|AWS Ruby Development Blog]&amp;lt;/ref&amp;gt;. A brief sketch of the version 2 interface is shown below.&lt;br /&gt;
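As a point of comparison only, the following is a minimal sketch of what an upload-and-share flow might look like with the version 2 resource interface (Aws::S3::Resource); the region, bucket name, and file path are placeholders rather than values taken from the examples below.&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
require 'aws-sdk'   # version 2 gem (aws-sdk-core plus aws-sdk-resources)&lt;br /&gt;
&lt;br /&gt;
s3 = Aws::S3::Resource.new(:region =&amp;gt; 'us-east-1')&lt;br /&gt;
&lt;br /&gt;
# lazily reference a bucket and an object inside it&lt;br /&gt;
object = s3.bucket('my-new-bucket').object('hello.txt')&lt;br /&gt;
&lt;br /&gt;
# upload a local file and print a time-limited download link&lt;br /&gt;
object.upload_file('/tmp/hello.txt')&lt;br /&gt;
puts object.presigned_url(:get, :expires_in =&amp;gt; 3600)&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;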
&lt;br /&gt;
==Examples==&lt;br /&gt;
The following section contains examples of Ruby code interfacing with S3. Sources and documentation for the code are provided. Please observe the copyrights if you choose to use any or all of the posted code.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
'''Note:''' There are three key classes in the AWS SDK &amp;lt;ref&amp;gt;[http://docs.aws.amazon.com/AmazonS3/latest/dev/UsingTheMPRubyAPI.html Using the Ruby API]&amp;lt;/ref&amp;gt; -&lt;br /&gt;
&lt;br /&gt;
:* '''AWS::S3''' - Denotes an interface to Amazon S3 for the Ruby SDK. It has the ''#buckets'' instance method for creating new buckets or accessing existing buckets.&lt;br /&gt;
:* '''AWS::S3::Bucket''' - Denotes an Amazon S3 Bucket. It provides the ''#objects'' instance method to access existing objects and also other methods to get information about a bucket.&lt;br /&gt;
:* '''AWS::S3::S3Object''' - Denotes an Amazon S3 object. It provides methods for getting information about an object, as well as for setting access permissions and for copying, deleting, and uploading objects.&lt;br /&gt;
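To show how the three classes relate before the individual examples below, here is a compact sketch (version 1 aws-sdk gem; the bucket and key names are made up):&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
require 'aws-sdk'&lt;br /&gt;
&lt;br /&gt;
s3 = AWS::S3.new                              # AWS::S3 - the top-level interface&lt;br /&gt;
bucket = s3.buckets.create('my-new-bucket')   # AWS::S3::Bucket - a container for objects&lt;br /&gt;
object = bucket.objects['hello.txt']          # AWS::S3::S3Object - a single stored object&lt;br /&gt;
&lt;br /&gt;
object.write('Hello World!')&lt;br /&gt;
puts object.read&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
Note that the ceph-based examples that follow use class-level calls (for example AWS::S3::S3Object.store) rather than these instance methods.&lt;br /&gt;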
&lt;br /&gt;
===Creating a connection to S3 server===&lt;br /&gt;
&lt;br /&gt;
Connecting to the S3 server is the essential starting point for accessing your data. The following example shows how to connect to the server via SSL &amp;lt;ref&amp;gt;Wikipedia: SSL http://en.wikipedia.org/wiki/SSL&amp;lt;/ref&amp;gt;&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
AWS::S3::Base.establish_connection!(&lt;br /&gt;
        :server            =&amp;gt; 'objects.example.com',&lt;br /&gt;
        :use_ssl           =&amp;gt; true,&lt;br /&gt;
        :access_key_id     =&amp;gt; 'my-access-key',&lt;br /&gt;
        :secret_access_key =&amp;gt; 'my-secret-key'&lt;br /&gt;
)&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
:Source: http://ceph.com/docs/master/radosgw/s3/ruby/&lt;br /&gt;
&lt;br /&gt;
===Listing all buckets you own===&lt;br /&gt;
Buckets hold data about the objects you upload. To access any object, you must access the bucket first. This example shows you how to query the S3 server for a list of all the buckets you own.&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
AWS::S3::Service.buckets.each do |bucket|&lt;br /&gt;
        puts &amp;quot;#{bucket.name}\t#{bucket.creation_date}&amp;quot;&lt;br /&gt;
end&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
'''Expected output:'''&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
mybuckat1   2011-04-21T18:05:39.000Z&lt;br /&gt;
mybuckat2   2011-04-21T18:05:48.000Z&lt;br /&gt;
mybuckat3   2011-04-21T18:07:18.000Z&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
:Source: http://ceph.com/docs/master/radosgw/s3/ruby/&lt;br /&gt;
&lt;br /&gt;
===Listing a bucket's contents===&lt;br /&gt;
For a known bucket (see example for listing all buckets you own), you can also query the server for all objects inside. This code shows you how.&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
new_bucket = AWS::S3::Bucket.find('my-new-bucket')&lt;br /&gt;
new_bucket.each do |object|&lt;br /&gt;
        puts &amp;quot;#{object.key}\t#{object.about['content-length']}\t#{object.about['last-modified']}&amp;quot;&lt;br /&gt;
end&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
'''Expected output'''&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
file1.filex 251262  2011-08-08T21:35:48.000Z&lt;br /&gt;
file2.filex 262518  2011-08-08T21:38:01.000Z&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
:Source: http://ceph.com/docs/master/radosgw/s3/ruby/&lt;br /&gt;
&lt;br /&gt;
===Deleting an object===&lt;br /&gt;
This example shows you how to delete the object &amp;quot;goodbye.txt&amp;quot;. You must specify the bucket as the second parameter of the S3Object.delete function.&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
AWS::S3::S3Object.delete('goodbye.txt', 'my-new-bucket')&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
:Source: http://ceph.com/docs/master/radosgw/s3/ruby/&lt;br /&gt;
&lt;br /&gt;
===Deleting a bucket===&lt;br /&gt;
Bucket removal may be necessary when you're trying to reduce the cost of maintaining your data on the S3 servers. This code allows you to delete buckets but only if they're empty.&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
AWS::S3::Bucket.delete('my-new-bucket')&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
:Source: http://ceph.com/docs/master/radosgw/s3/ruby/&lt;br /&gt;
&lt;br /&gt;
===Forced removal of non-empty buckets===&lt;br /&gt;
If you want to forcibly remove a bucket and dump all of its contents, you can force a deletion as shown below.&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
AWS::S3::Bucket.delete('my-new-bucket', :force =&amp;gt; true)&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
:Source: http://ceph.com/docs/master/radosgw/s3/ruby/&lt;br /&gt;
&lt;br /&gt;
===Creating an object===&lt;br /&gt;
Object creation is important to any program. If your app stores data via objects, you can use the S3 server to host them. In this example we create a text file hello.txt with content &amp;quot;Hello World!&amp;quot; and save it to my-new-bucket. Note that the :content_type is required for the ruby method S3Object.store.&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
AWS::S3::S3Object.store(&lt;br /&gt;
        'hello.txt',&lt;br /&gt;
        'Hello World!',&lt;br /&gt;
        'my-new-bucket',&lt;br /&gt;
        :content_type =&amp;gt; 'text/plain'&lt;br /&gt;
)&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
:Source: http://ceph.com/docs/master/radosgw/s3/ruby/&lt;br /&gt;
&lt;br /&gt;
===Change an object's ACL (access control list)===&lt;br /&gt;
Securing user data from casual web access is important if you are keeping sensitive information about users. This example shows how to open the hello.txt object created earlier in my-new-bucket to the public (anyone can access it), and how to restrict secret_plans.txt so that no one else can access it.&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
policy = AWS::S3::S3Object.acl('hello.txt', 'my-new-bucket')&lt;br /&gt;
policy.grants = [ AWS::S3::ACL::Grant.grant(:public_read) ]&lt;br /&gt;
AWS::S3::S3Object.acl('hello.txt', 'my-new-bucket', policy)&lt;br /&gt;
&lt;br /&gt;
policy = AWS::S3::S3Object.acl('secret_plans.txt', 'my-new-bucket')&lt;br /&gt;
policy.grants = []&lt;br /&gt;
AWS::S3::S3Object.acl('secret_plans.txt', 'my-new-bucket', policy)&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
:Source: http://ceph.com/docs/master/radosgw/s3/ruby/&lt;br /&gt;
&lt;br /&gt;
===Generating object download urls===&lt;br /&gt;
Generating download links for objects on the S3 server can make it easier for you to share your objects with other users. This example shows how to create a download link for the hello.txt object we created earlier.&lt;br /&gt;
Note that generating a link requires you to specify the bucket it resides in. Links can be public, as in the case of hello.txt below, or be set to expire on a timer through the expires_in symbol. In the example, the expiration of secret_plans.txt is 1 hour (60s * 60).&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
puts AWS::S3::S3Object.url_for(&lt;br /&gt;
        'hello.txt',&lt;br /&gt;
        'my-new-bucket',&lt;br /&gt;
        :authenticated =&amp;gt; false&lt;br /&gt;
)&lt;br /&gt;
&lt;br /&gt;
puts AWS::S3::S3Object.url_for(&lt;br /&gt;
        'secret_plans.txt',&lt;br /&gt;
        'my-new-bucket',&lt;br /&gt;
        :expires_in =&amp;gt; 60 * 60&lt;br /&gt;
)&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
'''Expected Output:'''&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
http://objects.dreamhost.com/my-bucket-name/hello.txt&lt;br /&gt;
http://objects.dreamhost.com/my-bucket-name/secret_plans.txt?Signature=XXXXXXXXXXXXXXXXXXXXXXXXXXX&amp;amp;Expires=1316027075&amp;amp;AWSAccessKeyId=XXXXXXXXXXXXXXXXXXX&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
:Source: http://ceph.com/docs/master/radosgw/s3/ruby/&lt;br /&gt;
&lt;br /&gt;
===Download an object to a local file===&lt;br /&gt;
An app may retrieve objects that were previously stored on the S3 server and save them locally for later use. This example shows how to download the poetry.pdf object from the bucket my-new-bucket to a local file.&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
open('/home/larry/documents/poetry.pdf', 'w') do |file|&lt;br /&gt;
        AWS::S3::S3Object.stream('poetry.pdf', 'my-new-bucket') do |chunk|&lt;br /&gt;
                file.write(chunk)&lt;br /&gt;
        end&lt;br /&gt;
end&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
:Source: http://ceph.com/docs/master/radosgw/s3/ruby/&lt;br /&gt;
&lt;br /&gt;
===Upload a file to Amazon S3===&lt;br /&gt;
This example is a combination of many of the examples shown above. A bucket is created and filled with a local file. The program also prints a public URL and a presigned URL for the uploaded object, and gives the user the option of deleting the object from S3 afterwards. &lt;br /&gt;
:As per the Apache License v 2.0, the following code may be reproduced and redistributed under the license below &amp;lt;ref&amp;gt;[http://aws.amazon.com/apache-2-0/ license]&amp;lt;/ref&amp;gt;.&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
# Copyright 2011-2013 Amazon.com, Inc. or its affiliates. All Rights Reserved.&lt;br /&gt;
#&lt;br /&gt;
# Licensed under the Apache License, Version 2.0 (the &amp;quot;License&amp;quot;). You&lt;br /&gt;
# may not use this file except in compliance with the License. A copy of&lt;br /&gt;
# the License is located at&lt;br /&gt;
#&lt;br /&gt;
#     http://aws.amazon.com/apache2.0/&lt;br /&gt;
#&lt;br /&gt;
# or in the &amp;quot;license&amp;quot; file accompanying this file. This file is&lt;br /&gt;
# distributed on an &amp;quot;AS IS&amp;quot; BASIS, WITHOUT WARRANTIES OR CONDITIONS OF&lt;br /&gt;
# ANY KIND, either express or implied. See the License for the specific&lt;br /&gt;
# language governing permissions and limitations under the License.&lt;br /&gt;
&lt;br /&gt;
require 'aws-sdk'&lt;br /&gt;
&lt;br /&gt;
(bucket_name, file_name) = ARGV&lt;br /&gt;
unless bucket_name &amp;amp;&amp;amp; file_name&lt;br /&gt;
  puts &amp;quot;Usage: upload_file.rb &amp;lt;BUCKET_NAME&amp;gt; &amp;lt;FILE_NAME&amp;gt;&amp;quot;&lt;br /&gt;
  exit 1&lt;br /&gt;
end&lt;br /&gt;
&lt;br /&gt;
# get an instance of the S3 interface using the default configuration&lt;br /&gt;
s3 = AWS::S3.new&lt;br /&gt;
&lt;br /&gt;
# create a bucket&lt;br /&gt;
b = s3.buckets.create(bucket_name)&lt;br /&gt;
&lt;br /&gt;
# upload a file&lt;br /&gt;
basename = File.basename(file_name)&lt;br /&gt;
o = b.objects[basename]&lt;br /&gt;
o.write(:file =&amp;gt; file_name)&lt;br /&gt;
&lt;br /&gt;
puts &amp;quot;Uploaded #{file_name} to:&amp;quot;&lt;br /&gt;
puts o.public_url&lt;br /&gt;
&lt;br /&gt;
# generate a presigned URL&lt;br /&gt;
puts &amp;quot;\nUse this URL to download the file:&amp;quot;&lt;br /&gt;
puts o.url_for(:read)&lt;br /&gt;
&lt;br /&gt;
puts &amp;quot;(press any key to delete the object)&amp;quot;&lt;br /&gt;
$stdin.getc&lt;br /&gt;
&lt;br /&gt;
o.delete&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
See the following reference for the AWS SDK for Ruby documentation: &amp;lt;ref&amp;gt;[http://docs.aws.amazon.com/AWSRubySDK/latest/_index.html AWS SDK for Ruby]&amp;lt;/ref&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==References==&lt;br /&gt;
&amp;lt;references/&amp;gt;&lt;/div&gt;</summary>
		<author><name>Achen4</name></author>
	</entry>
	<entry>
		<id>https://wiki.expertiza.ncsu.edu/index.php?title=CSC/ECE_517_Spring_2015/ch1a_7_SA&amp;diff=93720</id>
		<title>CSC/ECE 517 Spring 2015/ch1a 7 SA</title>
		<link rel="alternate" type="text/html" href="https://wiki.expertiza.ncsu.edu/index.php?title=CSC/ECE_517_Spring_2015/ch1a_7_SA&amp;diff=93720"/>
		<updated>2015-02-14T01:57:34Z</updated>

		<summary type="html">&lt;p&gt;Achen4: /* Upload a file to Amazon S3 */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;[[File:S3.gif|frame|Source: http://www.w7cloud.com/7-reasons-to-use-amazon-s3-cloud-computing-online-storage/|right]] Amazon Simple Storage Service (Amazon S3) is a remote, scalable, secure, and cost efficient storage space service provided by Amazon. Users are able to access their storage on Amazon S3 from the web via REST &amp;lt;ref&amp;gt;[http://en.wikipedia.org/wiki/Representational_state_transfer Wikipedia: REST]&amp;lt;/ref&amp;gt; HTTP &amp;lt;ref&amp;gt;[http://en.wikipedia.org/wiki/Hypertext_Transfer_Protocol HTTP]&amp;lt;/ref&amp;gt;, or SOAP &amp;lt;ref&amp;gt;[http://en.wikipedia.org/wiki/SOAP SOAP]&amp;lt;/ref&amp;gt; making their data accessible from virtually anywhere in the world. Amazon S3 implements redundancy across multiple devices on multiple facilities in order to safeguard against application failure ,data loss and minimization of downtime &amp;lt;ref&amp;gt;[http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/AmazonS3.html Amazon S3]&amp;lt;/ref&amp;gt;. Some of the most prominent users of Amazon S3 include: Netflix, SmugMug, Wetransfer, Pinterest, and NASDAQ &amp;lt;ref&amp;gt;[http://aws.amazon.com/s3/ Amazon S3 Homepage]&amp;lt;/ref&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
[https://docs.google.com/a/ncsu.edu/document/d/1TgBtp7flIPKJwkkShgtcIkt--mtHuwVHsQX6Tpzj1rc/edit Writing Assignment 1a]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
==Background==&lt;br /&gt;
&lt;br /&gt;
Amazon S3 launched in March of 2006 in the United States &amp;lt;ref&amp;gt;[http://phx.corporate-ir.net/phoenix.zhtml?c=176060&amp;amp;p=irol-newsArticle&amp;amp;ID=830816 Amazon Press Release]&amp;lt;/ref&amp;gt; and in Europe in November of 2007 &amp;lt;ref&amp;gt;[http://phx.corporate-ir.net/phoenix.zhtml?c=176060&amp;amp;p=irol-newsArticle&amp;amp;ID=1072982 Amazon Press Release]&amp;lt;/ref&amp;gt;. Since its inception, Amazon S3 has reported tremendous growth. Beginning in July of 2006, S3 hosted 800 million objects; April of 2007, 5 billion objects; October of 2007, 10 billion; Jan 2008, 14 billion &amp;lt;ref&amp;gt;[http://www.allthingsdistributed.com/2008/03/happy_birthday_amazon_s3.html Happy birthday Amazon S3]&amp;lt;/ref&amp;gt;; October 2008, 29 billion &amp;lt;ref&amp;gt;[https://aws.amazon.com/blogs/aws/amazon-s3-now/ amazon s3 now]&amp;lt;/ref&amp;gt;; March 2009, 52 billion &amp;lt;ref&amp;gt;[https://aws.amazon.com/blogs/aws/celebrating-s3s-third-birthday-with-an-upload-promotion/ Upload promotion]&amp;lt;/ref&amp;gt;; August 2009, 64 billion &amp;lt;ref&amp;gt;[http://www.eweek.com/c/a/Cloud-Computing/Amazons-Head-Start-in-the-Cloud-Pays-Off-584083 Amazons Head Start in the Cloud]&amp;lt;/ref&amp;gt;. In April of 2013, S3 now hosts more than 2 trillion objects and on average 1.1 million requests every second! &amp;lt;ref&amp;gt;[https://aws.amazon.com/blogs/aws/amazon-s3-two-trillion-objects-11-million-requests-second/ Two trillion objects]&amp;lt;/ref&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
===Design===&lt;br /&gt;
&lt;br /&gt;
S3 is an example of an object storage and is not like a traditional hierarchical file system. S3 exposes a simple feature set to improve robustness and all data in S3 is accessed in the terms of objects and buckets. &lt;br /&gt;
&lt;br /&gt;
====Objects====&lt;br /&gt;
&lt;br /&gt;
Objects are the basic units of storage in Amazon S3. Each object is composed of object data and metadata. S3 supports a size of up to 5 Terabytes per object. Each object has an associated metadata that is used to identify the object. Metadata is a set of name-value pairs that describe the object like date modified. Custom data about the object can be stored in metadata by the user. Every object is identified by a user defined key and is versioned by default. &amp;lt;ref&amp;gt;[http://docs.aws.amazon.com/AmazonS3/latest/dev/Introduction.html S3 Introduction]&amp;lt;/ref&amp;gt;. An object consists of the following - Key, Version ID, Value, Metadata, Subresources and Access Control Information. &amp;lt;ref&amp;gt;[http://docs.aws.amazon.com/AmazonS3/latest/dev/UsingObjects.html Using Objects]&amp;lt;/ref&amp;gt;&lt;br /&gt;
&lt;br /&gt;
====Buckets====&lt;br /&gt;
&lt;br /&gt;
A bucket is a container for objects and every object must be part of a bucket. Any number of objects can be part of a Bucket. Buckets can be configured to be hosted in a particular region (US, EU, Asia Pacific etc.) in order to optimize latency. S3 limits the number of buckets per account to 100. &amp;lt;ref&amp;gt;[http://docs.aws.amazon.com/AmazonS3/latest/dev/UsingBucket.html Using Bucket]&amp;lt;/ref&amp;gt;&lt;br /&gt;
&lt;br /&gt;
====Keys and Metadata====&lt;br /&gt;
&lt;br /&gt;
A user specifies a key at object creation; the key uniquely identifies the object within its bucket. Keys can be at most 1024 bytes long.&lt;br /&gt;
&lt;br /&gt;
There are two kinds of metadata for an object - System metadata and Object metadata. System metadata is used by S3 for object management. For eg. - Data, Content-Type etc. are stored as System metadata. Object metadata is optional and can be used by the user to add additional metadata to the objects during object creation. &amp;lt;ref&amp;gt;[http://docs.aws.amazon.com/AmazonS3/latest/dev/UsingMetadata.html Using Metadata]&amp;lt;/ref&amp;gt;&lt;br /&gt;
&lt;br /&gt;
====Regions====&lt;br /&gt;
&lt;br /&gt;
Regions allow a user to specify the geographical region where the buckets will be stored. This can be used to optimize latency and minimizing costs.&lt;br /&gt;
S3 supports the following regions - US Standard, US West (Oregon) region, US West (N. California) region, EU (Ireland) region, EU (Frankfurt) region, Asia Pacific (Singapore) region, Asia Pacific (Tokyo) region, Asia Pacific (Sydney) region, South America (Sao Paulo) region &amp;lt;ref&amp;gt;[http://docs.aws.amazon.com/AmazonS3/latest/dev/Introduction.html#Regions Regions]&amp;lt;/ref&amp;gt;&lt;br /&gt;
&lt;br /&gt;
====Versioning====&lt;br /&gt;
&lt;br /&gt;
All objects in S3 are versioned by default and it can be used to retrieve and restore every version of an object in a bucket. Every change to an object(create, modify, delete) results in a separate version of the object which can be later used for restoring or recovery. Versioning is done at the bucket level and not for individual objects. It can be turned off or on per bucket but a versioned-enabled bucket cannot be turned to an unversioned bucket. Versioning can only be paused in these cases. &amp;lt;ref&amp;gt;[http://docs.aws.amazon.com/AmazonS3/latest/dev/Versioning.html Versioning]&amp;lt;/ref&amp;gt;&lt;br /&gt;
&lt;br /&gt;
====Access Permissions====&lt;br /&gt;
&lt;br /&gt;
All resources (buckets, objects, etc.) in Amazon S3 are private by default. Only the resource owner can access a resource, and the owner can grant other users access to it. There are two types of access policies in S3 - resource-based policies and user policies. Resource-based policies are attached to a particular resource, while user policies are assigned to a particular user.&amp;lt;ref&amp;gt;[http://docs.aws.amazon.com/AmazonS3/latest/dev/s3-access-control.html Access Control]&amp;lt;/ref&amp;gt;&lt;br /&gt;
&lt;br /&gt;
====Data Protection====&lt;br /&gt;
&lt;br /&gt;
Objects are redundantly stored on multiple devices across multiple facilities within a region for durability. To improve durability, write requests do not return success before storing the data across multiple facilities. Also checksums are used to verify data integrity. If any corruption is detected, it is repaired using redundant data.&amp;lt;ref&amp;gt;[http://docs.aws.amazon.com/AmazonS3/latest/dev/DataDurability.html Data Durability]&amp;lt;/ref&amp;gt;&lt;br /&gt;
&lt;br /&gt;
===Ruby and Amazon S3===&lt;br /&gt;
&lt;br /&gt;
Amazon Web Services (AWS) provides an SDK &amp;lt;ref&amp;gt;[http://rubygems.org/gems/aws-sdk| download]&amp;lt;/ref&amp;gt; that works with Ruby for many amazon webservices, including Amazon S3. Developers new to the Amazon AWS SDK should begin with version 2 as it includes many built in features such as waiters, automatically paginated responses, and a streamlined plugin style architecture. Version 2 of the SDK has 2 &amp;quot;packages&amp;quot;, also referred to as &amp;quot;gems&amp;quot; &amp;lt;ref&amp;gt;[http://en.wikipedia.org/wiki/RubyGems Wikipedia: RubyGems]&amp;lt;/ref&amp;gt;:&lt;br /&gt;
&lt;br /&gt;
:* '''aws-sdk-core''' - provides a direct mapping to the AWS APIs including automatic response paging, waiters, parameter validation, and Ruby type support&lt;br /&gt;
:* '''aws-sdk-resources'''  - provides an object-oriented abstraction over low-level interfaces in the core to reduce the complexity of utilizing core interfaces; resource objects reference other objects such as an Amazon S3 instance and the attributes and actions as instance variables and methods.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
It should also be noted that there exists a Version 1 of the aws sdk that lacks some &amp;quot;convenience features&amp;quot; otherwise available in version 2 of the sdk. For more information see the &amp;lt;ref&amp;gt;[http://ruby.awsblog.com/post/Tx2OMCYFEZX2I6A/AWS-SDK-for-Ruby-V2-Preview-Release|AWS Ruby Development Blog]&amp;lt;/ref&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==Examples==&lt;br /&gt;
The following section contains examples of ruby interfacing with S3. Sources and documentation for the code are provided. Please observe the copyrights if you choose to use any or all of the posted code.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
'''Note:''' There are 3 key classes in AWS SDK &amp;lt;ref&amp;gt;[http://docs.aws.amazon.com/AmazonS3/latest/dev/UsingTheMPRubyAPI.html Using the Ruby API]&amp;lt;/ref&amp;gt; -&lt;br /&gt;
&lt;br /&gt;
:* '''AWS::S3''' - Denotes an interface to Amazon S3 for the Ruby SDK. It has the ''#buckets'' instance method for creating new buckets or accessing existing buckets.&lt;br /&gt;
:* '''AWS::S3::Bucket''' - Denotes an Amazon S3 Bucket. It provides the ''#objects'' instance method to access existing objects and also other methods to get information about a bucket.&lt;br /&gt;
:* '''AWS::S3::S3Object''' - Denotes an Amazon S3 Object. It provides the method that gives information about the object and also setting access permissions, copying, deleting and uploading objects.&lt;br /&gt;
&lt;br /&gt;
===Creating a connection to S3 server===&lt;br /&gt;
&lt;br /&gt;
Connecting to the S3 server is the essential starting point for accessing your data. The following example shows how to connect to the server via SSL &amp;lt;ref&amp;gt;Wikipedia: SSL http://en.wikipedia.org/wiki/SSL&amp;lt;/ref&amp;gt;&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
AWS::S3::Base.establish_connection!(&lt;br /&gt;
        :server            =&amp;gt; 'objects.example.com',&lt;br /&gt;
        :use_ssl           =&amp;gt; true,&lt;br /&gt;
        :access_key_id     =&amp;gt; 'my-access-key',&lt;br /&gt;
        :secret_access_key =&amp;gt; 'my-secret-key'&lt;br /&gt;
)&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
:Source: http://ceph.com/docs/master/radosgw/s3/ruby/&lt;br /&gt;
&lt;br /&gt;
===Listing all buckets you own===&lt;br /&gt;
Buckets hold data about the objects you upload. To access any object, you must access the bucket first. This example shows you how to query the S3 server for a list of all the buckets you own.&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
AWS::S3::Service.buckets.each do |bucket|&lt;br /&gt;
        puts &amp;quot;#{bucket.name}\t#{bucket.creation_date}&amp;quot;&lt;br /&gt;
end&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
'''Expected output:'''&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
mybuckat1   2011-04-21T18:05:39.000Z&lt;br /&gt;
mybuckat2   2011-04-21T18:05:48.000Z&lt;br /&gt;
mybuckat3   2011-04-21T18:07:18.000Z&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
:Source: http://ceph.com/docs/master/radosgw/s3/ruby/&lt;br /&gt;
&lt;br /&gt;
===Listing a bucket's contents===&lt;br /&gt;
For a known bucket (see example for listing all buckets you own), you can also query the server for all objects inside. This code shows you how.&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
new_bucket = AWS::S3::Bucket.find('my-new-bucket')&lt;br /&gt;
new_bucket.each do |object|&lt;br /&gt;
        puts &amp;quot;#{object.key}\t#{object.about['content-length']}\t#{object.about['last-modified']}&amp;quot;&lt;br /&gt;
end&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
'''Expected output'''&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
file1.filex 251262  2011-08-08T21:35:48.000Z&lt;br /&gt;
file2.filex 262518  2011-08-08T21:38:01.000Z&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
:Source: http://ceph.com/docs/master/radosgw/s3/ruby/&lt;br /&gt;
&lt;br /&gt;
===Deleting an object===&lt;br /&gt;
This example shows you how to delete the object &amp;quot;goodbye.txt&amp;quot;. You must specify the bucket as the second parameter of the S3Object.delete function.&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
AWS::S3::S3Object.delete('goodbye.txt', 'my-new-bucket')&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
:Source: http://ceph.com/docs/master/radosgw/s3/ruby/&lt;br /&gt;
&lt;br /&gt;
===Deleting a bucket===&lt;br /&gt;
Bucket removal may be necessary when you're trying to reduce the cost of maintaining your data on the S3 servers. This code allows you to delete buckets but only if they're empty.&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
AWS::S3::Bucket.delete('my-new-bucket')&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
:Source: http://ceph.com/docs/master/radosgw/s3/ruby/&lt;br /&gt;
&lt;br /&gt;
===Forced removal of non-empty buckets===&lt;br /&gt;
If you want to forcibly remove a bucket and dump all of its contents, you can force a deletion as shown below.&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
AWS::S3::Bucket.delete('my-new-bucket', :force =&amp;gt; true)&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
:Source: http://ceph.com/docs/master/radosgw/s3/ruby/&lt;br /&gt;
&lt;br /&gt;
===Creating an object===&lt;br /&gt;
Object creation is important to any program. If your app stores data via objects, you can use the S3 server to host them. In this example we create a text file hello.txt with content &amp;quot;Hello World!&amp;quot; and save it to my-new-bucket. Note that the :content_type is required for the ruby method S3Object.store.&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
AWS::S3::S3Object.store(&lt;br /&gt;
        'hello.txt',&lt;br /&gt;
        'Hello World!',&lt;br /&gt;
        'my-new-bucket',&lt;br /&gt;
        :content_type =&amp;gt; 'text/plain'&lt;br /&gt;
)&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
:Source: http://ceph.com/docs/master/radosgw/s3/ruby/&lt;br /&gt;
&lt;br /&gt;
===Change an object's ACL (access control list)===&lt;br /&gt;
Securing user data from view by any web user is important if you're keeping sensitive information about users. This example shows you how to open the hello.txt object created earlier in my-new-bucket to the public (anyone can access it). This example also shows how to restrict the secret_plans.txt so that no one can access them.&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
policy = AWS::S3::S3Object.acl('hello.txt', 'my-new-bucket')&lt;br /&gt;
policy.grants = [ AWS::S3::ACL::Grant.grant(:public_read) ]&lt;br /&gt;
AWS::S3::S3Object.acl('hello.txt', 'my-new-bucket', policy)&lt;br /&gt;
&lt;br /&gt;
policy = AWS::S3::S3Object.acl('secret_plans.txt', 'my-new-bucket')&lt;br /&gt;
policy.grants = []&lt;br /&gt;
AWS::S3::S3Object.acl('secret_plans.txt', 'my-new-bucket', policy)&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
:Source: http://ceph.com/docs/master/radosgw/s3/ruby/&lt;br /&gt;
&lt;br /&gt;
===Generating object download urls===&lt;br /&gt;
Generating download links for objects on the S3 server can make it easier for you to share your objects with other users. This example shows how to create a download link for the hello.txt object we created earlier.&lt;br /&gt;
Note that generating a link requires you to specify the bucket it resides in. Links can be public, as in the case of hello.txt below, or be set to expire on a timer through the expires_in symbol. In the example, the expiration of secret_plans.txt is 1 hour (60s * 60).&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
puts AWS::S3::S3Object.url_for(&lt;br /&gt;
        'hello.txt',&lt;br /&gt;
        'my-new-bucket',&lt;br /&gt;
        :authenticated =&amp;gt; false&lt;br /&gt;
)&lt;br /&gt;
&lt;br /&gt;
puts AWS::S3::S3Object.url_for(&lt;br /&gt;
        'secret_plans.txt',&lt;br /&gt;
        'my-new-bucket',&lt;br /&gt;
        :expires_in =&amp;gt; 60 * 60&lt;br /&gt;
)&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
'''Expected Output:'''&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
http://objects.dreamhost.com/my-bucket-name/hello.txt&lt;br /&gt;
http://objects.dreamhost.com/my-bucket-name/secret_plans.txt?Signature=XXXXXXXXXXXXXXXXXXXXXXXXXXX&amp;amp;Expires=1316027075&amp;amp;AWSAccessKeyId=XXXXXXXXXXXXXXXXXXX&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
:Source: http://ceph.com/docs/master/radosgw/s3/ruby/&lt;br /&gt;
&lt;br /&gt;
===Download an object to a local file===&lt;br /&gt;
An app may retrieve objects that were previously stored on the S3 server and save them locally for later use. This example shows how to download the poetry.pdf object from the bucket my-new-bucket to a local file.&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
open('/home/larry/documents/poetry.pdf', 'w') do |file|&lt;br /&gt;
        AWS::S3::S3Object.stream('poetry.pdf', 'my-new-bucket') do |chunk|&lt;br /&gt;
                file.write(chunk)&lt;br /&gt;
        end&lt;br /&gt;
end&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
:Source: http://ceph.com/docs/master/radosgw/s3/ruby/&lt;br /&gt;
&lt;br /&gt;
===Upload a file to Amazon S3===&lt;br /&gt;
This example is a combination of many of the examples shown above. A bucket is created and filled with a local file. The program also prints a public URL and a presigned URL for the uploaded object, and gives the user the option of deleting the object from S3 afterwards. &lt;br /&gt;
:As per the Apache License v 2.0, the following code may be reproduced and redistributed under the license below &amp;lt;ref&amp;gt;[http://aws.amazon.com/apache-2-0/ license]&amp;lt;/ref&amp;gt;.&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
# Copyright 2011-2013 Amazon.com, Inc. or its affiliates. All Rights Reserved.&lt;br /&gt;
#&lt;br /&gt;
# Licensed under the Apache License, Version 2.0 (the &amp;quot;License&amp;quot;). You&lt;br /&gt;
# may not use this file except in compliance with the License. A copy of&lt;br /&gt;
# the License is located at&lt;br /&gt;
#&lt;br /&gt;
#     http://aws.amazon.com/apache2.0/&lt;br /&gt;
#&lt;br /&gt;
# or in the &amp;quot;license&amp;quot; file accompanying this file. This file is&lt;br /&gt;
# distributed on an &amp;quot;AS IS&amp;quot; BASIS, WITHOUT WARRANTIES OR CONDITIONS OF&lt;br /&gt;
# ANY KIND, either express or implied. See the License for the specific&lt;br /&gt;
# language governing permissions and limitations under the License.&lt;br /&gt;
&lt;br /&gt;
require 'aws-sdk'&lt;br /&gt;
&lt;br /&gt;
(bucket_name, file_name) = ARGV&lt;br /&gt;
unless bucket_name &amp;amp;&amp;amp; file_name&lt;br /&gt;
  puts &amp;quot;Usage: upload_file.rb &amp;lt;BUCKET_NAME&amp;gt; &amp;lt;FILE_NAME&amp;gt;&amp;quot;&lt;br /&gt;
  exit 1&lt;br /&gt;
end&lt;br /&gt;
&lt;br /&gt;
# get an instance of the S3 interface using the default configuration&lt;br /&gt;
s3 = AWS::S3.new&lt;br /&gt;
&lt;br /&gt;
# create a bucket&lt;br /&gt;
b = s3.buckets.create(bucket_name)&lt;br /&gt;
&lt;br /&gt;
# upload a file&lt;br /&gt;
basename = File.basename(file_name)&lt;br /&gt;
o = b.objects[basename]&lt;br /&gt;
o.write(:file =&amp;gt; file_name)&lt;br /&gt;
&lt;br /&gt;
puts &amp;quot;Uploaded #{file_name} to:&amp;quot;&lt;br /&gt;
puts o.public_url&lt;br /&gt;
&lt;br /&gt;
# generate a presigned URL&lt;br /&gt;
puts &amp;quot;\nUse this URL to download the file:&amp;quot;&lt;br /&gt;
puts o.url_for(:read)&lt;br /&gt;
&lt;br /&gt;
puts &amp;quot;(press any key to delete the object)&amp;quot;&lt;br /&gt;
$stdin.getc&lt;br /&gt;
&lt;br /&gt;
o.delete&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
See the following reference for the AWS SDK for Ruby documentation: &amp;lt;ref&amp;gt;[http://docs.aws.amazon.com/AWSRubySDK/latest/_index.html AWS SDK for Ruby]&amp;lt;/ref&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==References==&lt;br /&gt;
&amp;lt;references/&amp;gt;&lt;/div&gt;</summary>
		<author><name>Achen4</name></author>
	</entry>
	<entry>
		<id>https://wiki.expertiza.ncsu.edu/index.php?title=CSC/ECE_517_Spring_2015/ch1a_7_SA&amp;diff=93718</id>
		<title>CSC/ECE 517 Spring 2015/ch1a 7 SA</title>
		<link rel="alternate" type="text/html" href="https://wiki.expertiza.ncsu.edu/index.php?title=CSC/ECE_517_Spring_2015/ch1a_7_SA&amp;diff=93718"/>
		<updated>2015-02-14T01:57:08Z</updated>

		<summary type="html">&lt;p&gt;Achen4: /* Upload a file to Amazon S3 */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;[[File:S3.gif|frame|Source: http://www.w7cloud.com/7-reasons-to-use-amazon-s3-cloud-computing-online-storage/|right]] Amazon Simple Storage Service (Amazon S3) is a remote, scalable, secure, and cost efficient storage space service provided by Amazon. Users are able to access their storage on Amazon S3 from the web via REST &amp;lt;ref&amp;gt;[http://en.wikipedia.org/wiki/Representational_state_transfer Wikipedia: REST]&amp;lt;/ref&amp;gt; HTTP &amp;lt;ref&amp;gt;[http://en.wikipedia.org/wiki/Hypertext_Transfer_Protocol HTTP]&amp;lt;/ref&amp;gt;, or SOAP &amp;lt;ref&amp;gt;[http://en.wikipedia.org/wiki/SOAP SOAP]&amp;lt;/ref&amp;gt; making their data accessible from virtually anywhere in the world. Amazon S3 implements redundancy across multiple devices on multiple facilities in order to safeguard against application failure ,data loss and minimization of downtime &amp;lt;ref&amp;gt;[http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/AmazonS3.html Amazon S3]&amp;lt;/ref&amp;gt;. Some of the most prominent users of Amazon S3 include: Netflix, SmugMug, Wetransfer, Pinterest, and NASDAQ &amp;lt;ref&amp;gt;[http://aws.amazon.com/s3/ Amazon S3 Homepage]&amp;lt;/ref&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
[https://docs.google.com/a/ncsu.edu/document/d/1TgBtp7flIPKJwkkShgtcIkt--mtHuwVHsQX6Tpzj1rc/edit Writing Assignment 1a]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
==Background==&lt;br /&gt;
&lt;br /&gt;
Amazon S3 launched in March of 2006 in the United States &amp;lt;ref&amp;gt;[http://phx.corporate-ir.net/phoenix.zhtml?c=176060&amp;amp;p=irol-newsArticle&amp;amp;ID=830816 Amazon Press Release]&amp;lt;/ref&amp;gt; and in Europe in November of 2007 &amp;lt;ref&amp;gt;[http://phx.corporate-ir.net/phoenix.zhtml?c=176060&amp;amp;p=irol-newsArticle&amp;amp;ID=1072982 Amazon Press Release]&amp;lt;/ref&amp;gt;. Since its inception, Amazon S3 has reported tremendous growth. Beginning in July of 2006, S3 hosted 800 million objects; April of 2007, 5 billion objects; October of 2007, 10 billion; Jan 2008, 14 billion &amp;lt;ref&amp;gt;[http://www.allthingsdistributed.com/2008/03/happy_birthday_amazon_s3.html Happy birthday Amazon S3]&amp;lt;/ref&amp;gt;; October 2008, 29 billion &amp;lt;ref&amp;gt;[https://aws.amazon.com/blogs/aws/amazon-s3-now/ amazon s3 now]&amp;lt;/ref&amp;gt;; March 2009, 52 billion &amp;lt;ref&amp;gt;[https://aws.amazon.com/blogs/aws/celebrating-s3s-third-birthday-with-an-upload-promotion/ Upload promotion]&amp;lt;/ref&amp;gt;; August 2009, 64 billion &amp;lt;ref&amp;gt;[http://www.eweek.com/c/a/Cloud-Computing/Amazons-Head-Start-in-the-Cloud-Pays-Off-584083 Amazons Head Start in the Cloud]&amp;lt;/ref&amp;gt;. In April of 2013, S3 now hosts more than 2 trillion objects and on average 1.1 million requests every second! &amp;lt;ref&amp;gt;[https://aws.amazon.com/blogs/aws/amazon-s3-two-trillion-objects-11-million-requests-second/ Two trillion objects]&amp;lt;/ref&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
===Design===&lt;br /&gt;
&lt;br /&gt;
S3 is an example of an object storage and is not like a traditional hierarchical file system. S3 exposes a simple feature set to improve robustness and all data in S3 is accessed in the terms of objects and buckets. &lt;br /&gt;
&lt;br /&gt;
====Objects====&lt;br /&gt;
&lt;br /&gt;
Objects are the basic units of storage in Amazon S3. Each object is composed of object data and metadata. S3 supports a size of up to 5 Terabytes per object. Each object has an associated metadata that is used to identify the object. Metadata is a set of name-value pairs that describe the object like date modified. Custom data about the object can be stored in metadata by the user. Every object is identified by a user defined key and is versioned by default. &amp;lt;ref&amp;gt;[http://docs.aws.amazon.com/AmazonS3/latest/dev/Introduction.html S3 Introduction]&amp;lt;/ref&amp;gt;. An object consists of the following - Key, Version ID, Value, Metadata, Subresources and Access Control Information. &amp;lt;ref&amp;gt;[http://docs.aws.amazon.com/AmazonS3/latest/dev/UsingObjects.html Using Objects]&amp;lt;/ref&amp;gt;&lt;br /&gt;
&lt;br /&gt;
====Buckets====&lt;br /&gt;
&lt;br /&gt;
A bucket is a container for objects and every object must be part of a bucket. Any number of objects can be part of a Bucket. Buckets can be configured to be hosted in a particular region (US, EU, Asia Pacific etc.) in order to optimize latency. S3 limits the number of buckets per account to 100. &amp;lt;ref&amp;gt;[http://docs.aws.amazon.com/AmazonS3/latest/dev/UsingBucket.html Using Bucket]&amp;lt;/ref&amp;gt;&lt;br /&gt;
&lt;br /&gt;
====Keys and Metadata====&lt;br /&gt;
&lt;br /&gt;
A user specifies a key at object creation; the key uniquely identifies the object within its bucket. Keys can be at most 1024 bytes long.&lt;br /&gt;
&lt;br /&gt;
There are two kinds of metadata for an object - System metadata and Object metadata. System metadata is used by S3 for object management. For eg. - Data, Content-Type etc. are stored as System metadata. Object metadata is optional and can be used by the user to add additional metadata to the objects during object creation. &amp;lt;ref&amp;gt;[http://docs.aws.amazon.com/AmazonS3/latest/dev/UsingMetadata.html Using Metadata]&amp;lt;/ref&amp;gt;&lt;br /&gt;
&lt;br /&gt;
====Regions====&lt;br /&gt;
&lt;br /&gt;
Regions allow a user to specify the geographical region where the buckets will be stored. This can be used to optimize latency and minimizing costs.&lt;br /&gt;
S3 supports the following regions - US Standard, US West (Oregon) region, US West (N. California) region, EU (Ireland) region, EU (Frankfurt) region, Asia Pacific (Singapore) region, Asia Pacific (Tokyo) region, Asia Pacific (Sydney) region, South America (Sao Paulo) region &amp;lt;ref&amp;gt;[http://docs.aws.amazon.com/AmazonS3/latest/dev/Introduction.html#Regions Regions]&amp;lt;/ref&amp;gt;&lt;br /&gt;
&lt;br /&gt;
====Versioning====&lt;br /&gt;
&lt;br /&gt;
All objects in S3 are versioned by default and it can be used to retrieve and restore every version of an object in a bucket. Every change to an object(create, modify, delete) results in a separate version of the object which can be later used for restoring or recovery. Versioning is done at the bucket level and not for individual objects. It can be turned off or on per bucket but a versioned-enabled bucket cannot be turned to an unversioned bucket. Versioning can only be paused in these cases. &amp;lt;ref&amp;gt;[http://docs.aws.amazon.com/AmazonS3/latest/dev/Versioning.html Versioning]&amp;lt;/ref&amp;gt;&lt;br /&gt;
&lt;br /&gt;
====Access Permissions====&lt;br /&gt;
&lt;br /&gt;
All resources (buckets, objects, etc.) in Amazon S3 are private by default. Only the resource owner can access a resource, and the owner can grant other users access to it. There are two types of access policies in S3 - resource-based policies and user policies. Resource-based policies are attached to a particular resource, while user policies are assigned to a particular user.&amp;lt;ref&amp;gt;[http://docs.aws.amazon.com/AmazonS3/latest/dev/s3-access-control.html Access Control]&amp;lt;/ref&amp;gt;&lt;br /&gt;
&lt;br /&gt;
====Data Protection====&lt;br /&gt;
&lt;br /&gt;
Objects are redundantly stored on multiple devices across multiple facilities within a region for durability. To improve durability, write requests do not return success before storing the data across multiple facilities. Also checksums are used to verify data integrity. If any corruption is detected, it is repaired using redundant data.&amp;lt;ref&amp;gt;[http://docs.aws.amazon.com/AmazonS3/latest/dev/DataDurability.html Data Durability]&amp;lt;/ref&amp;gt;&lt;br /&gt;
&lt;br /&gt;
===Ruby and Amazon S3===&lt;br /&gt;
&lt;br /&gt;
Amazon Web Services (AWS) provides an SDK &amp;lt;ref&amp;gt;[http://rubygems.org/gems/aws-sdk| download]&amp;lt;/ref&amp;gt; that works with Ruby for many amazon webservices, including Amazon S3. Developers new to the Amazon AWS SDK should begin with version 2 as it includes many built in features such as waiters, automatically paginated responses, and a streamlined plugin style architecture. Version 2 of the SDK has 2 &amp;quot;packages&amp;quot;, also referred to as &amp;quot;gems&amp;quot; &amp;lt;ref&amp;gt;[http://en.wikipedia.org/wiki/RubyGems Wikipedia: RubyGems]&amp;lt;/ref&amp;gt;:&lt;br /&gt;
&lt;br /&gt;
:* '''aws-sdk-core''' - provides a direct mapping to the AWS APIs including automatic response paging, waiters, parameter validation, and Ruby type support&lt;br /&gt;
:* '''aws-sdk-resources'''  - provides an object-oriented abstraction over low-level interfaces in the core to reduce the complexity of utilizing core interfaces; resource objects reference other objects such as an Amazon S3 instance and the attributes and actions as instance variables and methods.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
It should also be noted that there exists a Version 1 of the aws sdk that lacks some &amp;quot;convenience features&amp;quot; otherwise available in version 2 of the sdk. For more information see the &amp;lt;ref&amp;gt;[http://ruby.awsblog.com/post/Tx2OMCYFEZX2I6A/AWS-SDK-for-Ruby-V2-Preview-Release|AWS Ruby Development Blog]&amp;lt;/ref&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==Examples==&lt;br /&gt;
The following section contains examples of ruby interfacing with S3. Sources and documentation for the code are provided. Please observe the copyrights if you choose to use any or all of the posted code.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
'''Note:''' There are 3 key classes in AWS SDK &amp;lt;ref&amp;gt;[http://docs.aws.amazon.com/AmazonS3/latest/dev/UsingTheMPRubyAPI.html Using the Ruby API]&amp;lt;/ref&amp;gt; -&lt;br /&gt;
&lt;br /&gt;
:* '''AWS::S3''' - Denotes an interface to Amazon S3 for the Ruby SDK. It has the ''#buckets'' instance method for creating new buckets or accessing existing buckets.&lt;br /&gt;
:* '''AWS::S3::Bucket''' - Denotes an Amazon S3 Bucket. It provides the ''#objects'' instance method to access existing objects and also other methods to get information about a bucket.&lt;br /&gt;
:* '''AWS::S3::S3Object''' - Denotes an Amazon S3 Object. It provides the method that gives information about the object and also setting access permissions, copying, deleting and uploading objects.&lt;br /&gt;
&lt;br /&gt;
===Creating a connection to S3 server===&lt;br /&gt;
&lt;br /&gt;
Connecting to the S3 server is the essential starting point for accessing your data. The following example shows how to connect to the server via SSL &amp;lt;ref&amp;gt;Wikipedia: SSL http://en.wikipedia.org/wiki/SSL&amp;lt;/ref&amp;gt;&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
AWS::S3::Base.establish_connection!(&lt;br /&gt;
        :server            =&amp;gt; 'objects.example.com',&lt;br /&gt;
        :use_ssl           =&amp;gt; true,&lt;br /&gt;
        :access_key_id     =&amp;gt; 'my-access-key',&lt;br /&gt;
        :secret_access_key =&amp;gt; 'my-secret-key'&lt;br /&gt;
)&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
:Source: http://ceph.com/docs/master/radosgw/s3/ruby/&lt;br /&gt;
&lt;br /&gt;
===Listing all buckets you own===&lt;br /&gt;
Buckets hold data about the objects you upload. To access any object, you must access the bucket first. This example shows you how to query the S3 server for a list of all the buckets you own.&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
AWS::S3::Service.buckets.each do |bucket|&lt;br /&gt;
        puts &amp;quot;#{bucket.name}\t#{bucket.creation_date}&amp;quot;&lt;br /&gt;
end&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
'''Expected output:'''&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
mybuckat1   2011-04-21T18:05:39.000Z&lt;br /&gt;
mybuckat2   2011-04-21T18:05:48.000Z&lt;br /&gt;
mybuckat3   2011-04-21T18:07:18.000Z&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
:Source: http://ceph.com/docs/master/radosgw/s3/ruby/&lt;br /&gt;
&lt;br /&gt;
===Listing a bucket's contents===&lt;br /&gt;
For a known bucket (see example for listing all buckets you own), you can also query the server for all objects inside. This code shows you how.&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
new_bucket = AWS::S3::Bucket.find('my-new-bucket')&lt;br /&gt;
new_bucket.each do |object|&lt;br /&gt;
        puts &amp;quot;#{object.key}\t#{object.about['content-length']}\t#{object.about['last-modified']}&amp;quot;&lt;br /&gt;
end&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
'''Expected output'''&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
file1.filex 251262  2011-08-08T21:35:48.000Z&lt;br /&gt;
file2.filex 262518  2011-08-08T21:38:01.000Z&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
:Source: http://ceph.com/docs/master/radosgw/s3/ruby/&lt;br /&gt;
&lt;br /&gt;
===Deleting an object===&lt;br /&gt;
This example shows you how to delete the object &amp;quot;goodbye.txt&amp;quot;. You must specify the bucket as the second parameter of the S3Object.delete function.&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
AWS::S3::S3Object.delete('goodbye.txt', 'my-new-bucket')&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
:Source: http://ceph.com/docs/master/radosgw/s3/ruby/&lt;br /&gt;
&lt;br /&gt;
===Deleting a bucket===&lt;br /&gt;
Bucket removal may be necessary when you're trying to reduce the cost of maintaining your data on the S3 servers. This code allows you to delete buckets but only if they're empty.&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
AWS::S3::Bucket.delete('my-new-bucket')&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
:Source: http://ceph.com/docs/master/radosgw/s3/ruby/&lt;br /&gt;
&lt;br /&gt;
===Forced removal of non-empty buckets===&lt;br /&gt;
If you want to forcibly remove a bucket and dump all of its contents, you can force a deletion as shown below.&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
AWS::S3::Bucket.delete('my-new-bucket', :force =&amp;gt; true)&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
:Source: http://ceph.com/docs/master/radosgw/s3/ruby/&lt;br /&gt;
&lt;br /&gt;
===Creating an object===&lt;br /&gt;
Object creation is important to any program. If your app stores data via objects, you can use the S3 server to host them. In this example we create a text file hello.txt with content &amp;quot;Hello World!&amp;quot; and save it to my-new-bucket. Note that the :content_type option tells the Ruby method S3Object.store which MIME type to record for the object.&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
AWS::S3::S3Object.store(&lt;br /&gt;
        'hello.txt',&lt;br /&gt;
        'Hello World!',&lt;br /&gt;
        'my-new-bucket',&lt;br /&gt;
        :content_type =&amp;gt; 'text/plain'&lt;br /&gt;
)&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
:Source: http://ceph.com/docs/master/radosgw/s3/ruby/&lt;br /&gt;
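&lt;br /&gt;
For comparison, the same object can be stored through the object-oriented interface of the aws-sdk gem (version 1). The following is a rough sketch that assumes that gem and reuses the bucket and object names from the example above:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
require 'aws-sdk'&lt;br /&gt;
&lt;br /&gt;
s3  = AWS::S3.new&lt;br /&gt;
obj = s3.buckets['my-new-bucket'].objects['hello.txt']&lt;br /&gt;
&lt;br /&gt;
# write the object data and record its MIME type in one call&lt;br /&gt;
obj.write('Hello World!', :content_type =&amp;gt; 'text/plain')&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;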
&lt;br /&gt;
===Change an object's ACL (access control list)===&lt;br /&gt;
Securing user data from view by arbitrary web users is important if you're keeping sensitive information. This example shows you how to open the hello.txt object created earlier in my-new-bucket to the public (anyone can access it). It also shows how to restrict secret_plans.txt so that no one else can access it.&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
policy = AWS::S3::S3Object.acl('hello.txt', 'my-new-bucket')&lt;br /&gt;
policy.grants = [ AWS::S3::ACL::Grant.grant(:public_read) ]&lt;br /&gt;
AWS::S3::S3Object.acl('hello.txt', 'my-new-bucket', policy)&lt;br /&gt;
&lt;br /&gt;
policy = AWS::S3::S3Object.acl('secret_plans.txt', 'my-new-bucket')&lt;br /&gt;
policy.grants = []&lt;br /&gt;
AWS::S3::S3Object.acl('secret_plans.txt', 'my-new-bucket', policy)&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
:Source: http://ceph.com/docs/master/radosgw/s3/ruby/&lt;br /&gt;
&lt;br /&gt;
===Generating object download urls===&lt;br /&gt;
Generating download links for objects on the S3 server can make it easier for you to share your objects with other users. This example shows how to create a download link for the hello.txt object we created earlier.&lt;br /&gt;
Note that generating a link requires you to specify the bucket the object resides in. Links can be public, as in the case of hello.txt below, or set to expire after a given time through the :expires_in option. In this example, the link for secret_plans.txt expires after one hour (60 * 60 seconds).&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
puts AWS::S3::S3Object.url_for(&lt;br /&gt;
        'hello.txt',&lt;br /&gt;
        'my-new-bucket',&lt;br /&gt;
        :authenticated =&amp;gt; false&lt;br /&gt;
)&lt;br /&gt;
&lt;br /&gt;
puts AWS::S3::S3Object.url_for(&lt;br /&gt;
        'secret_plans.txt',&lt;br /&gt;
        'my-new-bucket',&lt;br /&gt;
        :expires_in =&amp;gt; 60 * 60&lt;br /&gt;
)&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
'''Expected Output:'''&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
http://objects.dreamhost.com/my-bucket-name/hello.txt&lt;br /&gt;
http://objects.dreamhost.com/my-bucket-name/secret_plans.txt?Signature=XXXXXXXXXXXXXXXXXXXXXXXXXXX&amp;amp;Expires=1316027075&amp;amp;AWSAccessKeyId=XXXXXXXXXXXXXXXXXXX&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
:Source: http://ceph.com/docs/master/radosgw/s3/ruby/&lt;br /&gt;
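&lt;br /&gt;
The aws-sdk gem (version 1) used in the upload example below can generate the same two kinds of link. The following is a rough sketch that assumes that gem and the objects created earlier; it prints a public URL and a presigned URL that expires after one hour:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
require 'aws-sdk'&lt;br /&gt;
&lt;br /&gt;
s3     = AWS::S3.new&lt;br /&gt;
bucket = s3.buckets['my-new-bucket']&lt;br /&gt;
&lt;br /&gt;
# public URL; only useful if the object's ACL allows anonymous reads&lt;br /&gt;
puts bucket.objects['hello.txt'].public_url&lt;br /&gt;
&lt;br /&gt;
# presigned URL for a read, valid for one hour&lt;br /&gt;
puts bucket.objects['secret_plans.txt'].url_for(:read, :expires =&amp;gt; 60 * 60)&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;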
&lt;br /&gt;
===Download an object from S3 to a local folder===&lt;br /&gt;
An app may retrieve objects previously stored on the S3 server and save them to the local file system for later use. This example shows how to download the poetry.pdf object from the bucket my-new-bucket to a local file.&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
open('/home/larry/documents/poetry.pdf', 'w') do |file|&lt;br /&gt;
        AWS::S3::S3Object.stream('poetry.pdf', 'my-new-bucket') do |chunk|&lt;br /&gt;
                file.write(chunk)&lt;br /&gt;
        end&lt;br /&gt;
end&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
:Source: http://ceph.com/docs/master/radosgw/s3/ruby/&lt;br /&gt;
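&lt;br /&gt;
With the aws-sdk gem (version 1) the same streaming download looks roughly like the sketch below (the file path, bucket and object names are reused from the example above):&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
require 'aws-sdk'&lt;br /&gt;
&lt;br /&gt;
s3 = AWS::S3.new&lt;br /&gt;
&lt;br /&gt;
# stream the object from S3 and write it to a local file chunk by chunk&lt;br /&gt;
File.open('/home/larry/documents/poetry.pdf', 'wb') do |file|&lt;br /&gt;
        s3.buckets['my-new-bucket'].objects['poetry.pdf'].read do |chunk|&lt;br /&gt;
                file.write(chunk)&lt;br /&gt;
        end&lt;br /&gt;
end&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;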
&lt;br /&gt;
===Upload a file to Amazon S3===&lt;br /&gt;
This example is a combination of many of the examples shown above. A bucket is created and filled with a local file. The program also generates public and presigned URLs for the uploaded object and, at the user's prompt, deletes the uploaded object again.&lt;br /&gt;
:As per the Apache License v2.0, the following code may be reproduced and redistributed under the following &amp;lt;ref&amp;gt;[http://aws.amazon.com/apache-2-0/ license]&amp;lt;/ref&amp;gt;.&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
# Copyright 2011-2013 Amazon.com, Inc. or its affiliates. All Rights Reserved.&lt;br /&gt;
#&lt;br /&gt;
# Licensed under the Apache License, Version 2.0 (the &amp;quot;License&amp;quot;). You&lt;br /&gt;
# may not use this file except in compliance with the License. A copy of&lt;br /&gt;
# the License is located at&lt;br /&gt;
#&lt;br /&gt;
#     http://aws.amazon.com/apache2.0/&lt;br /&gt;
#&lt;br /&gt;
# or in the &amp;quot;license&amp;quot; file accompanying this file. This file is&lt;br /&gt;
# distributed on an &amp;quot;AS IS&amp;quot; BASIS, WITHOUT WARRANTIES OR CONDITIONS OF&lt;br /&gt;
# ANY KIND, either express or implied. See the License for the specific&lt;br /&gt;
# language governing permissions and limitations under the License.&lt;br /&gt;
&lt;br /&gt;
require 'aws-sdk'&lt;br /&gt;
&lt;br /&gt;
(bucket_name, file_name) = ARGV&lt;br /&gt;
unless bucket_name &amp;amp;&amp;amp; file_name&lt;br /&gt;
  puts &amp;quot;Usage: upload_file.rb &amp;lt;BUCKET_NAME&amp;gt; &amp;lt;FILE_NAME&amp;gt;&amp;quot;&lt;br /&gt;
  exit 1&lt;br /&gt;
end&lt;br /&gt;
&lt;br /&gt;
# get an instance of the S3 interface using the default configuration&lt;br /&gt;
s3 = AWS::S3.new&lt;br /&gt;
&lt;br /&gt;
# create a bucket&lt;br /&gt;
b = s3.buckets.create(bucket_name)&lt;br /&gt;
&lt;br /&gt;
# upload a file&lt;br /&gt;
basename = File.basename(file_name)&lt;br /&gt;
o = b.objects[basename]&lt;br /&gt;
o.write(:file =&amp;gt; file_name)&lt;br /&gt;
&lt;br /&gt;
puts &amp;quot;Uploaded #{file_name} to:&amp;quot;&lt;br /&gt;
puts o.public_url&lt;br /&gt;
&lt;br /&gt;
# generate a presigned URL&lt;br /&gt;
puts &amp;quot;\nUse this URL to download the file:&amp;quot;&lt;br /&gt;
puts o.url_for(:read)&lt;br /&gt;
&lt;br /&gt;
puts &amp;quot;(press any key to delete the object)&amp;quot;&lt;br /&gt;
$stdin.getc&lt;br /&gt;
&lt;br /&gt;
o.delete&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
See the following link for the documentation for AWS SDK - &amp;lt;ref&amp;gt;[http://docs.aws.amazon.com/AWSRubySDK/latest/_index.html AWS SDK for Ruby]&amp;lt;/ref&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==References==&lt;br /&gt;
&amp;lt;references/&amp;gt;&lt;/div&gt;</summary>
		<author><name>Achen4</name></author>
	</entry>
	<entry>
		<id>https://wiki.expertiza.ncsu.edu/index.php?title=CSC/ECE_517_Spring_2015/ch1a_7_SA&amp;diff=93715</id>
		<title>CSC/ECE 517 Spring 2015/ch1a 7 SA</title>
		<link rel="alternate" type="text/html" href="https://wiki.expertiza.ncsu.edu/index.php?title=CSC/ECE_517_Spring_2015/ch1a_7_SA&amp;diff=93715"/>
		<updated>2015-02-14T01:47:18Z</updated>

		<summary type="html">&lt;p&gt;Achen4: /* Download an object to a folder on S3 */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;[[File:S3.gif|frame|Source: http://www.w7cloud.com/7-reasons-to-use-amazon-s3-cloud-computing-online-storage/|right]] Amazon Simple Storage Service (Amazon S3) is a remote, scalable, secure, and cost-efficient storage service provided by Amazon. Users are able to access their storage on Amazon S3 from the web via REST &amp;lt;ref&amp;gt;[http://en.wikipedia.org/wiki/Representational_state_transfer Wikipedia: REST]&amp;lt;/ref&amp;gt;, HTTP &amp;lt;ref&amp;gt;[http://en.wikipedia.org/wiki/Hypertext_Transfer_Protocol HTTP]&amp;lt;/ref&amp;gt;, or SOAP &amp;lt;ref&amp;gt;[http://en.wikipedia.org/wiki/SOAP SOAP]&amp;lt;/ref&amp;gt;, making their data accessible from virtually anywhere in the world. Amazon S3 implements redundancy across multiple devices in multiple facilities in order to safeguard against application failure and data loss and to minimize downtime &amp;lt;ref&amp;gt;[http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/AmazonS3.html Amazon S3]&amp;lt;/ref&amp;gt;. Some of the most prominent users of Amazon S3 include: Netflix, SmugMug, Wetransfer, Pinterest, and NASDAQ &amp;lt;ref&amp;gt;[http://aws.amazon.com/s3/ Amazon S3 Homepage]&amp;lt;/ref&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
[https://docs.google.com/a/ncsu.edu/document/d/1TgBtp7flIPKJwkkShgtcIkt--mtHuwVHsQX6Tpzj1rc/edit Writing Assignment 1a]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
==Background==&lt;br /&gt;
&lt;br /&gt;
Amazon S3 launched in March of 2006 in the United States &amp;lt;ref&amp;gt;[http://phx.corporate-ir.net/phoenix.zhtml?c=176060&amp;amp;p=irol-newsArticle&amp;amp;ID=830816 Amazon Press Release]&amp;lt;/ref&amp;gt; and in Europe in November of 2007 &amp;lt;ref&amp;gt;[http://phx.corporate-ir.net/phoenix.zhtml?c=176060&amp;amp;p=irol-newsArticle&amp;amp;ID=1072982 Amazon Press Release]&amp;lt;/ref&amp;gt;. Since its inception, Amazon S3 has reported tremendous growth. Beginning in July of 2006, S3 hosted 800 million objects; April of 2007, 5 billion objects; October of 2007, 10 billion; Jan 2008, 14 billion &amp;lt;ref&amp;gt;[http://www.allthingsdistributed.com/2008/03/happy_birthday_amazon_s3.html Happy birthday Amazon S3]&amp;lt;/ref&amp;gt;; October 2008, 29 billion &amp;lt;ref&amp;gt;[https://aws.amazon.com/blogs/aws/amazon-s3-now/ amazon s3 now]&amp;lt;/ref&amp;gt;; March 2009, 52 billion &amp;lt;ref&amp;gt;[https://aws.amazon.com/blogs/aws/celebrating-s3s-third-birthday-with-an-upload-promotion/ Upload promotion]&amp;lt;/ref&amp;gt;; August 2009, 64 billion &amp;lt;ref&amp;gt;[http://www.eweek.com/c/a/Cloud-Computing/Amazons-Head-Start-in-the-Cloud-Pays-Off-584083 Amazons Head Start in the Cloud]&amp;lt;/ref&amp;gt;. In April of 2013, S3 now hosts more than 2 trillion objects and on average 1.1 million requests every second! &amp;lt;ref&amp;gt;[https://aws.amazon.com/blogs/aws/amazon-s3-two-trillion-objects-11-million-requests-second/ Two trillion objects]&amp;lt;/ref&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
===Design===&lt;br /&gt;
&lt;br /&gt;
S3 is an example of object storage and is unlike a traditional hierarchical file system. S3 exposes a simple feature set to improve robustness, and all data in S3 is accessed in terms of objects and buckets.&lt;br /&gt;
&lt;br /&gt;
====Objects====&lt;br /&gt;
&lt;br /&gt;
Objects are the basic units of storage in Amazon S3. Each object is composed of object data and metadata. S3 supports a size of up to 5 terabytes per object. Each object has associated metadata that describes the object. Metadata is a set of name-value pairs, such as the date the object was last modified. Custom data about the object can be stored in the metadata by the user. Every object is identified by a user-defined key and can be versioned if versioning is enabled on its bucket. &amp;lt;ref&amp;gt;[http://docs.aws.amazon.com/AmazonS3/latest/dev/Introduction.html S3 Introduction]&amp;lt;/ref&amp;gt;. An object consists of the following - Key, Version ID, Value, Metadata, Subresources and Access Control Information. &amp;lt;ref&amp;gt;[http://docs.aws.amazon.com/AmazonS3/latest/dev/UsingObjects.html Using Objects]&amp;lt;/ref&amp;gt;&lt;br /&gt;
&lt;br /&gt;
====Buckets====&lt;br /&gt;
&lt;br /&gt;
A bucket is a container for objects and every object must be part of a bucket. Any number of objects can be part of a Bucket. Buckets can be configured to be hosted in a particular region (US, EU, Asia Pacific etc.) in order to optimize latency. S3 limits the number of buckets per account to 100. &amp;lt;ref&amp;gt;[http://docs.aws.amazon.com/AmazonS3/latest/dev/UsingBucket.html Using Bucket]&amp;lt;/ref&amp;gt;&lt;br /&gt;
&lt;br /&gt;
====Keys and Metadata====&lt;br /&gt;
&lt;br /&gt;
A user specifies a key at object creation, which is used to uniquely identify the object in the bucket. Keys can be at most 1024 bytes long.&lt;br /&gt;
&lt;br /&gt;
There are two kinds of metadata for an object - system metadata and object metadata. System metadata is used by S3 for object management; for example, the object's date and Content-Type are stored as system metadata. Object metadata is optional and can be used to attach additional, user-defined metadata to an object during object creation. &amp;lt;ref&amp;gt;[http://docs.aws.amazon.com/AmazonS3/latest/dev/UsingMetadata.html Using Metadata]&amp;lt;/ref&amp;gt;&lt;br /&gt;
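&lt;br /&gt;
The following is a rough sketch (assuming the aws-sdk version 1 gem, with placeholder names and values) of attaching user-defined metadata to an object when it is written and reading it back later:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
require 'aws-sdk'&lt;br /&gt;
&lt;br /&gt;
s3  = AWS::S3.new&lt;br /&gt;
obj = s3.buckets['my-new-bucket'].objects['notes.txt']&lt;br /&gt;
&lt;br /&gt;
# store the object data together with user-defined metadata&lt;br /&gt;
obj.write('some data', :metadata =&amp;gt; { 'author' =&amp;gt; 'larry', 'project' =&amp;gt; 'poetry' })&lt;br /&gt;
&lt;br /&gt;
puts obj.metadata['author']   # prints the stored value, 'larry'&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;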
&lt;br /&gt;
====Regions====&lt;br /&gt;
&lt;br /&gt;
Regions allow a user to specify the geographical region where the buckets will be stored. This can be used to optimize latency and minimize costs.&lt;br /&gt;
S3 supports the following regions - US Standard, US West (Oregon) region, US West (N. California) region, EU (Ireland) region, EU (Frankfurt) region, Asia Pacific (Singapore) region, Asia Pacific (Tokyo) region, Asia Pacific (Sydney) region, South America (Sao Paulo) region &amp;lt;ref&amp;gt;[http://docs.aws.amazon.com/AmazonS3/latest/dev/Introduction.html#Regions Regions]&amp;lt;/ref&amp;gt;&lt;br /&gt;
&lt;br /&gt;
====Versioning====&lt;br /&gt;
&lt;br /&gt;
Versioning can be enabled on a bucket and used to retrieve and restore every version of an object in that bucket. Once enabled, every change to an object (create, modify, delete) results in a separate version of the object, which can later be used for restoring or recovery. Versioning is configured at the bucket level and not for individual objects. It can be enabled per bucket, but a version-enabled bucket cannot be returned to an unversioned state; versioning can only be suspended. &amp;lt;ref&amp;gt;[http://docs.aws.amazon.com/AmazonS3/latest/dev/Versioning.html Versioning]&amp;lt;/ref&amp;gt;&lt;br /&gt;
&lt;br /&gt;
====Access Permissions====&lt;br /&gt;
&lt;br /&gt;
All resources (buckets, objects, etc.) are private in Amazon S3 by default. Only the resource owner can access the resource, and the owner can grant other users access to it. There are two types of access policies in S3 - resource-based policies and user policies. Resource-based policies are attached to a particular resource and user policies are assigned to a particular user.&amp;lt;ref&amp;gt;[http://docs.aws.amazon.com/AmazonS3/latest/dev/s3-access-control.html Access Control]&amp;lt;/ref&amp;gt;&lt;br /&gt;
&lt;br /&gt;
====Data Protection====&lt;br /&gt;
&lt;br /&gt;
Objects are redundantly stored on multiple devices across multiple facilities within a region for durability. To improve durability, write requests do not return success before storing the data across multiple facilities. Also checksums are used to verify data integrity. If any corruption is detected, it is repaired using redundant data.&amp;lt;ref&amp;gt;[http://docs.aws.amazon.com/AmazonS3/latest/dev/DataDurability.html Data Durability]&amp;lt;/ref&amp;gt;&lt;br /&gt;
&lt;br /&gt;
===Ruby and Amazon S3===&lt;br /&gt;
&lt;br /&gt;
Amazon Web Services (AWS) provides an SDK &amp;lt;ref&amp;gt;[http://rubygems.org/gems/aws-sdk download]&amp;lt;/ref&amp;gt; that works with Ruby for many Amazon web services, including Amazon S3. Developers new to the AWS SDK should begin with version 2, as it includes many built-in features such as waiters, automatically paginated responses, and a streamlined plugin-style architecture. Version 2 of the SDK has two &amp;quot;packages&amp;quot;, also referred to as &amp;quot;gems&amp;quot; &amp;lt;ref&amp;gt;[http://en.wikipedia.org/wiki/RubyGems Wikipedia: RubyGems]&amp;lt;/ref&amp;gt;:&lt;br /&gt;
&lt;br /&gt;
:* '''aws-sdk-core''' - provides a direct mapping to the AWS APIs including automatic response paging, waiters, parameter validation, and Ruby type support&lt;br /&gt;
:* '''aws-sdk-resources''' - provides an object-oriented abstraction over the low-level interfaces in the core to reduce the complexity of using them; resource objects represent AWS entities such as an Amazon S3 bucket or object and expose their attributes and actions as instance variables and methods (see the short sketch below).&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
It should also be noted that there is a version 1 of the AWS SDK that lacks some of the &amp;quot;convenience features&amp;quot; available in version 2 of the SDK. For more information see the &amp;lt;ref&amp;gt;[http://ruby.awsblog.com/post/Tx2OMCYFEZX2I6A/AWS-SDK-for-Ruby-V2-Preview-Release AWS Ruby Development Blog]&amp;lt;/ref&amp;gt;&lt;br /&gt;
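&lt;br /&gt;
As a small illustration of the resource-oriented style of version 2, the following is a rough sketch (assuming the aws-sdk-resources gem, with placeholder region, bucket and object names) that uploads an object and generates a presigned download link for it:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
require 'aws-sdk-resources'&lt;br /&gt;
&lt;br /&gt;
# region and names are placeholders&lt;br /&gt;
s3     = Aws::S3::Resource.new(region: 'us-east-1')&lt;br /&gt;
bucket = s3.bucket('my-new-bucket')&lt;br /&gt;
object = bucket.object('hello.txt')&lt;br /&gt;
&lt;br /&gt;
object.put(body: 'Hello World!')                         # upload the object&lt;br /&gt;
puts object.presigned_url(:get, expires_in: 60 * 60)     # presigned download link&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;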
&lt;br /&gt;
==Examples==&lt;br /&gt;
The following section contains examples of ruby interfacing with S3. Sources and documentation for the code are provided. Please observe the copyrights if you choose to use any or all of the posted code.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
'''Note:''' There are 3 key classes in AWS SDK &amp;lt;ref&amp;gt;[http://docs.aws.amazon.com/AmazonS3/latest/dev/UsingTheMPRubyAPI.html Using the Ruby API]&amp;lt;/ref&amp;gt; -&lt;br /&gt;
&lt;br /&gt;
:* '''AWS::S3''' - Denotes an interface to Amazon S3 for the Ruby SDK. It has the ''#buckets'' instance method for creating new buckets or accessing existing buckets.&lt;br /&gt;
:* '''AWS::S3::Bucket''' - Denotes an Amazon S3 Bucket. It provides the ''#objects'' instance method to access existing objects and also other methods to get information about a bucket.&lt;br /&gt;
:* '''AWS::S3::S3Object''' - Denotes an Amazon S3 object. It provides methods that give information about the object, as well as methods for setting access permissions and for copying, deleting and uploading objects.&lt;br /&gt;
&lt;br /&gt;
===Creating a connection to S3 server===&lt;br /&gt;
&lt;br /&gt;
Connecting to the S3 server is the essential starting point for accessing your data. The following example shows how to connect to the server via SSL &amp;lt;ref&amp;gt;Wikipedia: SSL http://en.wikipedia.org/wiki/SSL&amp;lt;/ref&amp;gt;&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
AWS::S3::Base.establish_connection!(&lt;br /&gt;
        :server            =&amp;gt; 'objects.example.com',&lt;br /&gt;
        :use_ssl           =&amp;gt; true,&lt;br /&gt;
        :access_key_id     =&amp;gt; 'my-access-key',&lt;br /&gt;
        :secret_access_key =&amp;gt; 'my-secret-key'&lt;br /&gt;
)&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
:Source: http://ceph.com/docs/master/radosgw/s3/ruby/&lt;br /&gt;
&lt;br /&gt;
===Listing all buckets you own===&lt;br /&gt;
Buckets hold data about the objects you upload. To access any object, you must access the bucket first. This example shows you how to query the S3 server for a list of all the buckets you own.&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
AWS::S3::Service.buckets.each do |bucket|&lt;br /&gt;
        puts &amp;quot;#{bucket.name}\t#{bucket.creation_date}&amp;quot;&lt;br /&gt;
end&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
'''Expected output:'''&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
mybuckat1   2011-04-21T18:05:39.000Z&lt;br /&gt;
mybuckat2   2011-04-21T18:05:48.000Z&lt;br /&gt;
mybuckat3   2011-04-21T18:07:18.000Z&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
:Source: http://ceph.com/docs/master/radosgw/s3/ruby/&lt;br /&gt;
&lt;br /&gt;
===Listing a bucket's contents===&lt;br /&gt;
For a known bucket (see example for listing all buckets you own), you can also query the server for all objects inside. This code shows you how.&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
new_bucket = AWS::S3::Bucket.find('my-new-bucket')&lt;br /&gt;
new_bucket.each do |object|&lt;br /&gt;
        puts &amp;quot;#{object.key}\t#{object.about['content-length']}\t#{object.about['last-modified']}&amp;quot;&lt;br /&gt;
end&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
'''Expected output'''&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
file1.filex 251262  2011-08-08T21:35:48.000Z&lt;br /&gt;
file2.filex 262518  2011-08-08T21:38:01.000Z&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
:Source: http://ceph.com/docs/master/radosgw/s3/ruby/&lt;br /&gt;
&lt;br /&gt;
===Deleting an object===&lt;br /&gt;
This example shows you how to delete the object &amp;quot;goodbye.txt&amp;quot;. You must specify the bucket as the second parameter of the S3Object.delete function.&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
AWS::S3::S3Object.delete('goodbye.txt', 'my-new-bucket')&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
:Source: http://ceph.com/docs/master/radosgw/s3/ruby/&lt;br /&gt;
&lt;br /&gt;
===Deleting a bucket===&lt;br /&gt;
Bucket removal may be necessary when you're trying to reduce the cost of maintaining your data on the S3 servers. This code allows you to delete buckets but only if they're empty.&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
AWS::S3::Bucket.delete('my-new-bucket')&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
:Source: http://ceph.com/docs/master/radosgw/s3/ruby/&lt;br /&gt;
&lt;br /&gt;
===Forced removal of non-empty buckets===&lt;br /&gt;
If you want to forcibly remove a bucket and dump all of its contents, you can force a deletion as shown below.&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
AWS::S3::Bucket.delete('my-new-bucket', :force =&amp;gt; true)&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
:Source: http://ceph.com/docs/master/radosgw/s3/ruby/&lt;br /&gt;
&lt;br /&gt;
===Creating an object===&lt;br /&gt;
Object creation is important to any program. If your app stores data via objects, you can use the S3 server to host them. In this example we create a text file hello.txt with content &amp;quot;Hello World!&amp;quot; and save it to my-new-bucket. Note that the :content_type option tells the Ruby method S3Object.store which MIME type to record for the object.&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
AWS::S3::S3Object.store(&lt;br /&gt;
        'hello.txt',&lt;br /&gt;
        'Hello World!',&lt;br /&gt;
        'my-new-bucket',&lt;br /&gt;
        :content_type =&amp;gt; 'text/plain'&lt;br /&gt;
)&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
:Source: http://ceph.com/docs/master/radosgw/s3/ruby/&lt;br /&gt;
&lt;br /&gt;
===Change an object's ACL (access control list)===&lt;br /&gt;
Securing user data from view by arbitrary web users is important if you're keeping sensitive information. This example shows you how to open the hello.txt object created earlier in my-new-bucket to the public (anyone can access it). It also shows how to restrict secret_plans.txt so that no one else can access it.&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
policy = AWS::S3::S3Object.acl('hello.txt', 'my-new-bucket')&lt;br /&gt;
policy.grants = [ AWS::S3::ACL::Grant.grant(:public_read) ]&lt;br /&gt;
AWS::S3::S3Object.acl('hello.txt', 'my-new-bucket', policy)&lt;br /&gt;
&lt;br /&gt;
policy = AWS::S3::S3Object.acl('secret_plans.txt', 'my-new-bucket')&lt;br /&gt;
policy.grants = []&lt;br /&gt;
AWS::S3::S3Object.acl('secret_plans.txt', 'my-new-bucket', policy)&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
:Source: http://ceph.com/docs/master/radosgw/s3/ruby/&lt;br /&gt;
&lt;br /&gt;
===Generating object download urls===&lt;br /&gt;
Generating download links for objects on the S3 server can make it easier for you to share your objects with other users. This example shows how to create a download link for the hello.txt object we created earlier.&lt;br /&gt;
Note that generating a link requires you to specify the bucket the object resides in. Links can be public, as in the case of hello.txt below, or set to expire after a given time through the :expires_in option. In this example, the link for secret_plans.txt expires after one hour (60 * 60 seconds).&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
puts AWS::S3::S3Object.url_for(&lt;br /&gt;
        'hello.txt',&lt;br /&gt;
        'my-new-bucket',&lt;br /&gt;
        :authenticated =&amp;gt; false&lt;br /&gt;
)&lt;br /&gt;
&lt;br /&gt;
puts AWS::S3::S3Object.url_for(&lt;br /&gt;
        'secret_plans.txt',&lt;br /&gt;
        'my-new-bucket',&lt;br /&gt;
        :expires_in =&amp;gt; 60 * 60&lt;br /&gt;
)&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
'''Expected Output:'''&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
http://objects.dreamhost.com/my-bucket-name/hello.txt&lt;br /&gt;
http://objects.dreamhost.com/my-bucket-name/secret_plans.txt?Signature=XXXXXXXXXXXXXXXXXXXXXXXXXXX&amp;amp;Expires=1316027075&amp;amp;AWSAccessKeyId=XXXXXXXXXXXXXXXXXXX&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
:Source: http://ceph.com/docs/master/radosgw/s3/ruby/&lt;br /&gt;
&lt;br /&gt;
===Download an object from S3 to a local folder===&lt;br /&gt;
An app may retrieve objects previously stored on the S3 server and save them to the local file system for later use. This example shows how to download the poetry.pdf object from the bucket my-new-bucket to a local file.&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
open('/home/larry/documents/poetry.pdf', 'w') do |file|&lt;br /&gt;
        AWS::S3::S3Object.stream('poetry.pdf', 'my-new-bucket') do |chunk|&lt;br /&gt;
                file.write(chunk)&lt;br /&gt;
        end&lt;br /&gt;
end&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
:Source: http://ceph.com/docs/master/radosgw/s3/ruby/&lt;br /&gt;
&lt;br /&gt;
===Upload a file to Amazon S3===&lt;br /&gt;
:As per the Apache License v2.0, the following code may be reproduced and redistributed under the following &amp;lt;ref&amp;gt;[http://aws.amazon.com/apache-2-0/ license]&amp;lt;/ref&amp;gt;.&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
# Copyright 2011-2013 Amazon.com, Inc. or its affiliates. All Rights Reserved.&lt;br /&gt;
#&lt;br /&gt;
# Licensed under the Apache License, Version 2.0 (the &amp;quot;License&amp;quot;). You&lt;br /&gt;
# may not use this file except in compliance with the License. A copy of&lt;br /&gt;
# the License is located at&lt;br /&gt;
#&lt;br /&gt;
#     http://aws.amazon.com/apache2.0/&lt;br /&gt;
#&lt;br /&gt;
# or in the &amp;quot;license&amp;quot; file accompanying this file. This file is&lt;br /&gt;
# distributed on an &amp;quot;AS IS&amp;quot; BASIS, WITHOUT WARRANTIES OR CONDITIONS OF&lt;br /&gt;
# ANY KIND, either express or implied. See the License for the specific&lt;br /&gt;
# language governing permissions and limitations under the License.&lt;br /&gt;
&lt;br /&gt;
require 'aws-sdk'&lt;br /&gt;
&lt;br /&gt;
(bucket_name, file_name) = ARGV&lt;br /&gt;
unless bucket_name &amp;amp;&amp;amp; file_name&lt;br /&gt;
  puts &amp;quot;Usage: upload_file.rb &amp;lt;BUCKET_NAME&amp;gt; &amp;lt;FILE_NAME&amp;gt;&amp;quot;&lt;br /&gt;
  exit 1&lt;br /&gt;
end&lt;br /&gt;
&lt;br /&gt;
# get an instance of the S3 interface using the default configuration&lt;br /&gt;
s3 = AWS::S3.new&lt;br /&gt;
&lt;br /&gt;
# create a bucket&lt;br /&gt;
b = s3.buckets.create(bucket_name)&lt;br /&gt;
&lt;br /&gt;
# upload a file&lt;br /&gt;
basename = File.basename(file_name)&lt;br /&gt;
o = b.objects[basename]&lt;br /&gt;
o.write(:file =&amp;gt; file_name)&lt;br /&gt;
&lt;br /&gt;
puts &amp;quot;Uploaded #{file_name} to:&amp;quot;&lt;br /&gt;
puts o.public_url&lt;br /&gt;
&lt;br /&gt;
# generate a presigned URL&lt;br /&gt;
puts &amp;quot;\nUse this URL to download the file:&amp;quot;&lt;br /&gt;
puts o.url_for(:read)&lt;br /&gt;
&lt;br /&gt;
puts &amp;quot;(press any key to delete the object)&amp;quot;&lt;br /&gt;
$stdin.getc&lt;br /&gt;
&lt;br /&gt;
o.delete&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
See the following link for the documentation for AWS SDK - &amp;lt;ref&amp;gt;[http://docs.aws.amazon.com/AWSRubySDK/latest/_index.html AWS SDK for Ruby]&amp;lt;/ref&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==References==&lt;br /&gt;
&amp;lt;references/&amp;gt;&lt;/div&gt;</summary>
		<author><name>Achen4</name></author>
	</entry>
	<entry>
		<id>https://wiki.expertiza.ncsu.edu/index.php?title=CSC/ECE_517_Spring_2015/ch1a_7_SA&amp;diff=93714</id>
		<title>CSC/ECE 517 Spring 2015/ch1a 7 SA</title>
		<link rel="alternate" type="text/html" href="https://wiki.expertiza.ncsu.edu/index.php?title=CSC/ECE_517_Spring_2015/ch1a_7_SA&amp;diff=93714"/>
		<updated>2015-02-14T01:47:04Z</updated>

		<summary type="html">&lt;p&gt;Achen4: /* Creating an object */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;[[File:S3.gif|frame|Source: http://www.w7cloud.com/7-reasons-to-use-amazon-s3-cloud-computing-online-storage/|right]] Amazon Simple Storage Service (Amazon S3) is a remote, scalable, secure, and cost-efficient storage service provided by Amazon. Users are able to access their storage on Amazon S3 from the web via REST &amp;lt;ref&amp;gt;[http://en.wikipedia.org/wiki/Representational_state_transfer Wikipedia: REST]&amp;lt;/ref&amp;gt;, HTTP &amp;lt;ref&amp;gt;[http://en.wikipedia.org/wiki/Hypertext_Transfer_Protocol HTTP]&amp;lt;/ref&amp;gt;, or SOAP &amp;lt;ref&amp;gt;[http://en.wikipedia.org/wiki/SOAP SOAP]&amp;lt;/ref&amp;gt;, making their data accessible from virtually anywhere in the world. Amazon S3 implements redundancy across multiple devices in multiple facilities in order to safeguard against application failure and data loss and to minimize downtime &amp;lt;ref&amp;gt;[http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/AmazonS3.html Amazon S3]&amp;lt;/ref&amp;gt;. Some of the most prominent users of Amazon S3 include: Netflix, SmugMug, Wetransfer, Pinterest, and NASDAQ &amp;lt;ref&amp;gt;[http://aws.amazon.com/s3/ Amazon S3 Homepage]&amp;lt;/ref&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
[https://docs.google.com/a/ncsu.edu/document/d/1TgBtp7flIPKJwkkShgtcIkt--mtHuwVHsQX6Tpzj1rc/edit Writing Assignment 1a]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
==Background==&lt;br /&gt;
&lt;br /&gt;
Amazon S3 launched in March of 2006 in the United States &amp;lt;ref&amp;gt;[http://phx.corporate-ir.net/phoenix.zhtml?c=176060&amp;amp;p=irol-newsArticle&amp;amp;ID=830816 Amazon Press Release]&amp;lt;/ref&amp;gt; and in Europe in November of 2007 &amp;lt;ref&amp;gt;[http://phx.corporate-ir.net/phoenix.zhtml?c=176060&amp;amp;p=irol-newsArticle&amp;amp;ID=1072982 Amazon Press Release]&amp;lt;/ref&amp;gt;. Since its inception, Amazon S3 has reported tremendous growth. Beginning in July of 2006, S3 hosted 800 million objects; April of 2007, 5 billion objects; October of 2007, 10 billion; Jan 2008, 14 billion &amp;lt;ref&amp;gt;[http://www.allthingsdistributed.com/2008/03/happy_birthday_amazon_s3.html Happy birthday Amazon S3]&amp;lt;/ref&amp;gt;; October 2008, 29 billion &amp;lt;ref&amp;gt;[https://aws.amazon.com/blogs/aws/amazon-s3-now/ amazon s3 now]&amp;lt;/ref&amp;gt;; March 2009, 52 billion &amp;lt;ref&amp;gt;[https://aws.amazon.com/blogs/aws/celebrating-s3s-third-birthday-with-an-upload-promotion/ Upload promotion]&amp;lt;/ref&amp;gt;; August 2009, 64 billion &amp;lt;ref&amp;gt;[http://www.eweek.com/c/a/Cloud-Computing/Amazons-Head-Start-in-the-Cloud-Pays-Off-584083 Amazons Head Start in the Cloud]&amp;lt;/ref&amp;gt;. In April of 2013, S3 now hosts more than 2 trillion objects and on average 1.1 million requests every second! &amp;lt;ref&amp;gt;[https://aws.amazon.com/blogs/aws/amazon-s3-two-trillion-objects-11-million-requests-second/ Two trillion objects]&amp;lt;/ref&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
===Design===&lt;br /&gt;
&lt;br /&gt;
S3 is an example of object storage and is unlike a traditional hierarchical file system. S3 exposes a simple feature set to improve robustness, and all data in S3 is accessed in terms of objects and buckets.&lt;br /&gt;
&lt;br /&gt;
====Objects====&lt;br /&gt;
&lt;br /&gt;
Objects are the basic units of storage in Amazon S3. Each object is composed of object data and metadata. S3 supports a size of up to 5 Terabytes per object. Each object has an associated metadata that is used to identify the object. Metadata is a set of name-value pairs that describe the object like date modified. Custom data about the object can be stored in metadata by the user. Every object is identified by a user defined key and is versioned by default. &amp;lt;ref&amp;gt;[http://docs.aws.amazon.com/AmazonS3/latest/dev/Introduction.html S3 Introduction]&amp;lt;/ref&amp;gt;. An object consists of the following - Key, Version ID, Value, Metadata, Subresources and Access Control Information. &amp;lt;ref&amp;gt;[http://docs.aws.amazon.com/AmazonS3/latest/dev/UsingObjects.html Using Objects]&amp;lt;/ref&amp;gt;&lt;br /&gt;
&lt;br /&gt;
====Buckets====&lt;br /&gt;
&lt;br /&gt;
A bucket is a container for objects and every object must be part of a bucket. Any number of objects can be part of a Bucket. Buckets can be configured to be hosted in a particular region (US, EU, Asia Pacific etc.) in order to optimize latency. S3 limits the number of buckets per account to 100. &amp;lt;ref&amp;gt;[http://docs.aws.amazon.com/AmazonS3/latest/dev/UsingBucket.html Using Bucket]&amp;lt;/ref&amp;gt;&lt;br /&gt;
&lt;br /&gt;
====Keys and Metadata====&lt;br /&gt;
&lt;br /&gt;
A user specifies a key at object creation, which is used to uniquely identify the object in the bucket. Keys can be at most 1024 bytes long.&lt;br /&gt;
&lt;br /&gt;
There are two kinds of metadata for an object - System metadata and Object metadata. System metadata is used by S3 for object management. For eg. - Data, Content-Type etc. are stored as System metadata. Object metadata is optional and can be used by the user to add additional metadata to the objects during object creation. &amp;lt;ref&amp;gt;[http://docs.aws.amazon.com/AmazonS3/latest/dev/UsingMetadata.html Using Metadata]&amp;lt;/ref&amp;gt;&lt;br /&gt;
&lt;br /&gt;
====Regions====&lt;br /&gt;
&lt;br /&gt;
Regions allow a user to specify the geographical region where the buckets will be stored. This can be used to optimize latency and minimize costs.&lt;br /&gt;
S3 supports the following regions - US Standard, US West (Oregon) region, US West (N. California) region, EU (Ireland) region, EU (Frankfurt) region, Asia Pacific (Singapore) region, Asia Pacific (Tokyo) region, Asia Pacific (Sydney) region, South America (Sao Paulo) region &amp;lt;ref&amp;gt;[http://docs.aws.amazon.com/AmazonS3/latest/dev/Introduction.html#Regions Regions]&amp;lt;/ref&amp;gt;&lt;br /&gt;
&lt;br /&gt;
====Versioning====&lt;br /&gt;
&lt;br /&gt;
All objects in S3 are versioned by default and it can be used to retrieve and restore every version of an object in a bucket. Every change to an object(create, modify, delete) results in a separate version of the object which can be later used for restoring or recovery. Versioning is done at the bucket level and not for individual objects. It can be turned off or on per bucket but a versioned-enabled bucket cannot be turned to an unversioned bucket. Versioning can only be paused in these cases. &amp;lt;ref&amp;gt;[http://docs.aws.amazon.com/AmazonS3/latest/dev/Versioning.html Versioning]&amp;lt;/ref&amp;gt;&lt;br /&gt;
&lt;br /&gt;
====Access Permissions====&lt;br /&gt;
&lt;br /&gt;
All resources (buckets, objects, etc.) are private in Amazon S3 by default. Only the resource owner can access the resource, and the owner can grant other users access to it. There are two types of access policies in S3 - resource-based policies and user policies. Resource-based policies are attached to a particular resource and user policies are assigned to a particular user.&amp;lt;ref&amp;gt;[http://docs.aws.amazon.com/AmazonS3/latest/dev/s3-access-control.html Access Control]&amp;lt;/ref&amp;gt;&lt;br /&gt;
&lt;br /&gt;
====Data Protection====&lt;br /&gt;
&lt;br /&gt;
Objects are redundantly stored on multiple devices across multiple facilities within a region for durability. To improve durability, write requests do not return success before storing the data across multiple facilities. Also checksums are used to verify data integrity. If any corruption is detected, it is repaired using redundant data.&amp;lt;ref&amp;gt;[http://docs.aws.amazon.com/AmazonS3/latest/dev/DataDurability.html Data Durability]&amp;lt;/ref&amp;gt;&lt;br /&gt;
&lt;br /&gt;
===Ruby and Amazon S3===&lt;br /&gt;
&lt;br /&gt;
Amazon Web Services (AWS) provides an SDK &amp;lt;ref&amp;gt;[http://rubygems.org/gems/aws-sdk| download]&amp;lt;/ref&amp;gt; that works with Ruby for many amazon webservices, including Amazon S3. Developers new to the Amazon AWS SDK should begin with version 2 as it includes many built in features such as waiters, automatically paginated responses, and a streamlined plugin style architecture. Version 2 of the SDK has 2 &amp;quot;packages&amp;quot;, also referred to as &amp;quot;gems&amp;quot; &amp;lt;ref&amp;gt;[http://en.wikipedia.org/wiki/RubyGems Wikipedia: RubyGems]&amp;lt;/ref&amp;gt;:&lt;br /&gt;
&lt;br /&gt;
:* '''aws-sdk-core''' - provides a direct mapping to the AWS APIs including automatic response paging, waiters, parameter validation, and Ruby type support&lt;br /&gt;
:* '''aws-sdk-resources'''  - provides an object-oriented abstraction over low-level interfaces in the core to reduce the complexity of utilizing core interfaces; resource objects reference other objects such as an Amazon S3 instance and the attributes and actions as instance variables and methods.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
It should also be noted that there exists a Version 1 of the aws sdk that lacks some &amp;quot;convenience features&amp;quot; otherwise available in version 2 of the sdk. For more information see the &amp;lt;ref&amp;gt;[http://ruby.awsblog.com/post/Tx2OMCYFEZX2I6A/AWS-SDK-for-Ruby-V2-Preview-Release|AWS Ruby Development Blog]&amp;lt;/ref&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==Examples==&lt;br /&gt;
The following section contains examples of ruby interfacing with S3. Sources and documentation for the code are provided. Please observe the copyrights if you choose to use any or all of the posted code.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
'''Note:''' There are 3 key classes in AWS SDK &amp;lt;ref&amp;gt;[http://docs.aws.amazon.com/AmazonS3/latest/dev/UsingTheMPRubyAPI.html Using the Ruby API]&amp;lt;/ref&amp;gt; -&lt;br /&gt;
&lt;br /&gt;
:* '''AWS::S3''' - Denotes an interface to Amazon S3 for the Ruby SDK. It has the ''#buckets'' instance method for creating new buckets or accessing existing buckets.&lt;br /&gt;
:* '''AWS::S3::Bucket''' - Denotes an Amazon S3 Bucket. It provides the ''#objects'' instance method to access existing objects and also other methods to get information about a bucket.&lt;br /&gt;
:* '''AWS::S3::S3Object''' - Denotes an Amazon S3 object. It provides methods that give information about the object, as well as methods for setting access permissions and for copying, deleting and uploading objects.&lt;br /&gt;
&lt;br /&gt;
===Creating a connection to S3 server===&lt;br /&gt;
&lt;br /&gt;
Connecting to the S3 server is the essential starting point for accessing your data. The following example shows how to connect to the server via SSL &amp;lt;ref&amp;gt;Wikipedia: SSL http://en.wikipedia.org/wiki/SSL&amp;lt;/ref&amp;gt;&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
AWS::S3::Base.establish_connection!(&lt;br /&gt;
        :server            =&amp;gt; 'objects.example.com',&lt;br /&gt;
        :use_ssl           =&amp;gt; true,&lt;br /&gt;
        :access_key_id     =&amp;gt; 'my-access-key',&lt;br /&gt;
        :secret_access_key =&amp;gt; 'my-secret-key'&lt;br /&gt;
)&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
:Source: http://ceph.com/docs/master/radosgw/s3/ruby/&lt;br /&gt;
&lt;br /&gt;
===Listing all buckets you own===&lt;br /&gt;
Buckets hold data about the objects you upload. To access any object, you must access the bucket first. This example shows you how to query the S3 server for a list of all the buckets you own.&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
AWS::S3::Service.buckets.each do |bucket|&lt;br /&gt;
        puts &amp;quot;#{bucket.name}\t#{bucket.creation_date}&amp;quot;&lt;br /&gt;
end&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
'''Expected output:'''&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
mybuckat1   2011-04-21T18:05:39.000Z&lt;br /&gt;
mybuckat2   2011-04-21T18:05:48.000Z&lt;br /&gt;
mybuckat3   2011-04-21T18:07:18.000Z&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
:Source: http://ceph.com/docs/master/radosgw/s3/ruby/&lt;br /&gt;
&lt;br /&gt;
===Listing a bucket's contents===&lt;br /&gt;
For a known bucket (see example for listing all buckets you own), you can also query the server for all objects inside. This code shows you how.&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
new_bucket = AWS::S3::Bucket.find('my-new-bucket')&lt;br /&gt;
new_bucket.each do |object|&lt;br /&gt;
        puts &amp;quot;#{object.key}\t#{object.about['content-length']}\t#{object.about['last-modified']}&amp;quot;&lt;br /&gt;
end&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
'''Expected output'''&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
file1.filex 251262  2011-08-08T21:35:48.000Z&lt;br /&gt;
file2.filex 262518  2011-08-08T21:38:01.000Z&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
:Source: http://ceph.com/docs/master/radosgw/s3/ruby/&lt;br /&gt;
&lt;br /&gt;
===Deleting an object===&lt;br /&gt;
This example shows you how to delete the object &amp;quot;goodbye.txt&amp;quot;. You must specify the bucket as the second parameter of the S3Object.delete function.&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
AWS::S3::S3Object.delete('goodbye.txt', 'my-new-bucket')&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
:Source: http://ceph.com/docs/master/radosgw/s3/ruby/&lt;br /&gt;
&lt;br /&gt;
===Deleting a bucket===&lt;br /&gt;
Bucket removal may be necessary when you're trying to reduce the cost of maintaining your data on the S3 servers. This code allows you to delete buckets but only if they're empty.&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
AWS::S3::Bucket.delete('my-new-bucket')&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
:Source: http://ceph.com/docs/master/radosgw/s3/ruby/&lt;br /&gt;
&lt;br /&gt;
===Forced removal of non-empty buckets===&lt;br /&gt;
If you want to forcibly remove a bucket and dump all of its contents, you can force a deletion as shown below.&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
AWS::S3::Bucket.delete('my-new-bucket', :force =&amp;gt; true)&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
:Source: http://ceph.com/docs/master/radosgw/s3/ruby/&lt;br /&gt;
&lt;br /&gt;
===Creating an object===&lt;br /&gt;
Object creation is important to any program. If your app stores data via objects, you can use the S3 server to host them. In this example we create a text file hello.txt with content &amp;quot;Hello World!&amp;quot; and save it to my-new-bucket. Note that the :content_type option tells the Ruby method S3Object.store which MIME type to record for the object.&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
AWS::S3::S3Object.store(&lt;br /&gt;
        'hello.txt',&lt;br /&gt;
        'Hello World!',&lt;br /&gt;
        'my-new-bucket',&lt;br /&gt;
        :content_type =&amp;gt; 'text/plain'&lt;br /&gt;
)&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
:Source: http://ceph.com/docs/master/radosgw/s3/ruby/&lt;br /&gt;
&lt;br /&gt;
===Change an object's ACL (access control list)===&lt;br /&gt;
Securing user data from view by arbitrary web users is important if you're keeping sensitive information. This example shows you how to open the hello.txt object created earlier in my-new-bucket to the public (anyone can access it). It also shows how to restrict secret_plans.txt so that no one else can access it.&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
policy = AWS::S3::S3Object.acl('hello.txt', 'my-new-bucket')&lt;br /&gt;
policy.grants = [ AWS::S3::ACL::Grant.grant(:public_read) ]&lt;br /&gt;
AWS::S3::S3Object.acl('hello.txt', 'my-new-bucket', policy)&lt;br /&gt;
&lt;br /&gt;
policy = AWS::S3::S3Object.acl('secret_plans.txt', 'my-new-bucket')&lt;br /&gt;
policy.grants = []&lt;br /&gt;
AWS::S3::S3Object.acl('secret_plans.txt', 'my-new-bucket', policy)&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
:Source: http://ceph.com/docs/master/radosgw/s3/ruby/&lt;br /&gt;
&lt;br /&gt;
===Generating object download urls===&lt;br /&gt;
Generating download links for objects on the S3 server can make it easier for you to share your objects with other users. This example shows how to create a download link for the hello.txt object we created earlier.&lt;br /&gt;
Note that generating a link requires you to specify the bucket the object resides in. Links can be public, as in the case of hello.txt below, or set to expire after a given time through the :expires_in option. In this example, the link for secret_plans.txt expires after one hour (60 * 60 seconds).&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
puts AWS::S3::S3Object.url_for(&lt;br /&gt;
        'hello.txt',&lt;br /&gt;
        'my-new-bucket',&lt;br /&gt;
        :authenticated =&amp;gt; false&lt;br /&gt;
)&lt;br /&gt;
&lt;br /&gt;
puts AWS::S3::S3Object.url_for(&lt;br /&gt;
        'secret_plans.txt',&lt;br /&gt;
        'my-new-bucket',&lt;br /&gt;
        :expires_in =&amp;gt; 60 * 60&lt;br /&gt;
)&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
'''Expected Output:'''&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
http://objects.dreamhost.com/my-bucket-name/hello.txt&lt;br /&gt;
http://objects.dreamhost.com/my-bucket-name/secret_plans.txt?Signature=XXXXXXXXXXXXXXXXXXXXXXXXXXX&amp;amp;Expires=1316027075&amp;amp;AWSAccessKeyId=XXXXXXXXXXXXXXXXXXX&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
:Source: http://ceph.com/docs/master/radosgw/s3/ruby/&lt;br /&gt;
&lt;br /&gt;
===Download an object from S3 to a local folder===&lt;br /&gt;
An app may retrieve objects previously stored on the S3 server and save them to the local file system for later use. This example shows how to download the poetry.pdf object from the bucket &amp;quot;my-new-bucket&amp;quot; to a local file.&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
open('/home/larry/documents/poetry.pdf', 'w') do |file|&lt;br /&gt;
        AWS::S3::S3Object.stream('poetry.pdf', 'my-new-bucket') do |chunk|&lt;br /&gt;
                file.write(chunk)&lt;br /&gt;
        end&lt;br /&gt;
end&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
:Source: http://ceph.com/docs/master/radosgw/s3/ruby/&lt;br /&gt;
&lt;br /&gt;
===Upload a file to Amazon S3===&lt;br /&gt;
:As per the Apache License v2.0, the following code may be reproduced and redistributed under the following &amp;lt;ref&amp;gt;[http://aws.amazon.com/apache-2-0/ license]&amp;lt;/ref&amp;gt;.&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
# Copyright 2011-2013 Amazon.com, Inc. or its affiliates. All Rights Reserved.&lt;br /&gt;
#&lt;br /&gt;
# Licensed under the Apache License, Version 2.0 (the &amp;quot;License&amp;quot;). You&lt;br /&gt;
# may not use this file except in compliance with the License. A copy of&lt;br /&gt;
# the License is located at&lt;br /&gt;
#&lt;br /&gt;
#     http://aws.amazon.com/apache2.0/&lt;br /&gt;
#&lt;br /&gt;
# or in the &amp;quot;license&amp;quot; file accompanying this file. This file is&lt;br /&gt;
# distributed on an &amp;quot;AS IS&amp;quot; BASIS, WITHOUT WARRANTIES OR CONDITIONS OF&lt;br /&gt;
# ANY KIND, either express or implied. See the License for the specific&lt;br /&gt;
# language governing permissions and limitations under the License.&lt;br /&gt;
&lt;br /&gt;
require 'aws-sdk'&lt;br /&gt;
&lt;br /&gt;
(bucket_name, file_name) = ARGV&lt;br /&gt;
unless bucket_name &amp;amp;&amp;amp; file_name&lt;br /&gt;
  puts &amp;quot;Usage: upload_file.rb &amp;lt;BUCKET_NAME&amp;gt; &amp;lt;FILE_NAME&amp;gt;&amp;quot;&lt;br /&gt;
  exit 1&lt;br /&gt;
end&lt;br /&gt;
&lt;br /&gt;
# get an instance of the S3 interface using the default configuration&lt;br /&gt;
s3 = AWS::S3.new&lt;br /&gt;
&lt;br /&gt;
# create a bucket&lt;br /&gt;
b = s3.buckets.create(bucket_name)&lt;br /&gt;
&lt;br /&gt;
# upload a file&lt;br /&gt;
basename = File.basename(file_name)&lt;br /&gt;
o = b.objects[basename]&lt;br /&gt;
o.write(:file =&amp;gt; file_name)&lt;br /&gt;
&lt;br /&gt;
puts &amp;quot;Uploaded #{file_name} to:&amp;quot;&lt;br /&gt;
puts o.public_url&lt;br /&gt;
&lt;br /&gt;
# generate a presigned URL&lt;br /&gt;
puts &amp;quot;\nUse this URL to download the file:&amp;quot;&lt;br /&gt;
puts o.url_for(:read)&lt;br /&gt;
&lt;br /&gt;
puts &amp;quot;(press any key to delete the object)&amp;quot;&lt;br /&gt;
$stdin.getc&lt;br /&gt;
&lt;br /&gt;
o.delete&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
See the following link for the documentation for AWS SDK - &amp;lt;ref&amp;gt;[http://docs.aws.amazon.com/AWSRubySDK/latest/_index.html AWS SDK for Ruby]&amp;lt;/ref&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==References==&lt;br /&gt;
&amp;lt;references/&amp;gt;&lt;/div&gt;</summary>
		<author><name>Achen4</name></author>
	</entry>
	<entry>
		<id>https://wiki.expertiza.ncsu.edu/index.php?title=CSC/ECE_517_Spring_2015/ch1a_7_SA&amp;diff=93713</id>
		<title>CSC/ECE 517 Spring 2015/ch1a 7 SA</title>
		<link rel="alternate" type="text/html" href="https://wiki.expertiza.ncsu.edu/index.php?title=CSC/ECE_517_Spring_2015/ch1a_7_SA&amp;diff=93713"/>
		<updated>2015-02-14T01:46:40Z</updated>

		<summary type="html">&lt;p&gt;Achen4: /* Download an object to a folder */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;[[File:S3.gif|frame|Source: http://www.w7cloud.com/7-reasons-to-use-amazon-s3-cloud-computing-online-storage/|right]] Amazon Simple Storage Service (Amazon S3) is a remote, scalable, secure, and cost-efficient storage service provided by Amazon. Users are able to access their storage on Amazon S3 from the web via REST &amp;lt;ref&amp;gt;[http://en.wikipedia.org/wiki/Representational_state_transfer Wikipedia: REST]&amp;lt;/ref&amp;gt;, HTTP &amp;lt;ref&amp;gt;[http://en.wikipedia.org/wiki/Hypertext_Transfer_Protocol HTTP]&amp;lt;/ref&amp;gt;, or SOAP &amp;lt;ref&amp;gt;[http://en.wikipedia.org/wiki/SOAP SOAP]&amp;lt;/ref&amp;gt;, making their data accessible from virtually anywhere in the world. Amazon S3 implements redundancy across multiple devices in multiple facilities in order to safeguard against application failure and data loss and to minimize downtime &amp;lt;ref&amp;gt;[http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/AmazonS3.html Amazon S3]&amp;lt;/ref&amp;gt;. Some of the most prominent users of Amazon S3 include: Netflix, SmugMug, Wetransfer, Pinterest, and NASDAQ &amp;lt;ref&amp;gt;[http://aws.amazon.com/s3/ Amazon S3 Homepage]&amp;lt;/ref&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
[https://docs.google.com/a/ncsu.edu/document/d/1TgBtp7flIPKJwkkShgtcIkt--mtHuwVHsQX6Tpzj1rc/edit Writing Assignment 1a]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
==Background==&lt;br /&gt;
&lt;br /&gt;
Amazon S3 launched in March of 2006 in the United States &amp;lt;ref&amp;gt;[http://phx.corporate-ir.net/phoenix.zhtml?c=176060&amp;amp;p=irol-newsArticle&amp;amp;ID=830816 Amazon Press Release]&amp;lt;/ref&amp;gt; and in Europe in November of 2007 &amp;lt;ref&amp;gt;[http://phx.corporate-ir.net/phoenix.zhtml?c=176060&amp;amp;p=irol-newsArticle&amp;amp;ID=1072982 Amazon Press Release]&amp;lt;/ref&amp;gt;. Since its inception, Amazon S3 has reported tremendous growth. Beginning in July of 2006, S3 hosted 800 million objects; April of 2007, 5 billion objects; October of 2007, 10 billion; Jan 2008, 14 billion &amp;lt;ref&amp;gt;[http://www.allthingsdistributed.com/2008/03/happy_birthday_amazon_s3.html Happy birthday Amazon S3]&amp;lt;/ref&amp;gt;; October 2008, 29 billion &amp;lt;ref&amp;gt;[https://aws.amazon.com/blogs/aws/amazon-s3-now/ amazon s3 now]&amp;lt;/ref&amp;gt;; March 2009, 52 billion &amp;lt;ref&amp;gt;[https://aws.amazon.com/blogs/aws/celebrating-s3s-third-birthday-with-an-upload-promotion/ Upload promotion]&amp;lt;/ref&amp;gt;; August 2009, 64 billion &amp;lt;ref&amp;gt;[http://www.eweek.com/c/a/Cloud-Computing/Amazons-Head-Start-in-the-Cloud-Pays-Off-584083 Amazons Head Start in the Cloud]&amp;lt;/ref&amp;gt;. In April of 2013, S3 now hosts more than 2 trillion objects and on average 1.1 million requests every second! &amp;lt;ref&amp;gt;[https://aws.amazon.com/blogs/aws/amazon-s3-two-trillion-objects-11-million-requests-second/ Two trillion objects]&amp;lt;/ref&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
===Design===&lt;br /&gt;
&lt;br /&gt;
S3 is an example of object storage and is unlike a traditional hierarchical file system. S3 exposes a simple feature set to improve robustness, and all data in S3 is accessed in terms of objects and buckets.&lt;br /&gt;
&lt;br /&gt;
====Objects====&lt;br /&gt;
&lt;br /&gt;
Objects are the basic units of storage in Amazon S3. Each object is composed of object data and metadata. S3 supports a size of up to 5 Terabytes per object. Each object has an associated metadata that is used to identify the object. Metadata is a set of name-value pairs that describe the object like date modified. Custom data about the object can be stored in metadata by the user. Every object is identified by a user defined key and is versioned by default. &amp;lt;ref&amp;gt;[http://docs.aws.amazon.com/AmazonS3/latest/dev/Introduction.html S3 Introduction]&amp;lt;/ref&amp;gt;. An object consists of the following - Key, Version ID, Value, Metadata, Subresources and Access Control Information. &amp;lt;ref&amp;gt;[http://docs.aws.amazon.com/AmazonS3/latest/dev/UsingObjects.html Using Objects]&amp;lt;/ref&amp;gt;&lt;br /&gt;
&lt;br /&gt;
====Buckets====&lt;br /&gt;
&lt;br /&gt;
A bucket is a container for objects and every object must be part of a bucket. Any number of objects can be part of a Bucket. Buckets can be configured to be hosted in a particular region (US, EU, Asia Pacific etc.) in order to optimize latency. S3 limits the number of buckets per account to 100. &amp;lt;ref&amp;gt;[http://docs.aws.amazon.com/AmazonS3/latest/dev/UsingBucket.html Using Bucket]&amp;lt;/ref&amp;gt;&lt;br /&gt;
&lt;br /&gt;
====Keys and Metadata====&lt;br /&gt;
&lt;br /&gt;
A user specifies a key at object creation, which is used to uniquely identify the object in the bucket. Keys can be at most 1024 bytes long.&lt;br /&gt;
&lt;br /&gt;
There are two kinds of metadata for an object - System metadata and Object metadata. System metadata is used by S3 for object management. For eg. - Data, Content-Type etc. are stored as System metadata. Object metadata is optional and can be used by the user to add additional metadata to the objects during object creation. &amp;lt;ref&amp;gt;[http://docs.aws.amazon.com/AmazonS3/latest/dev/UsingMetadata.html Using Metadata]&amp;lt;/ref&amp;gt;&lt;br /&gt;
&lt;br /&gt;
====Regions====&lt;br /&gt;
&lt;br /&gt;
Regions allow a user to specify the geographical region where the buckets will be stored. This can be used to optimize latency and minimize costs.&lt;br /&gt;
S3 supports the following regions - US Standard, US West (Oregon) region, US West (N. California) region, EU (Ireland) region, EU (Frankfurt) region, Asia Pacific (Singapore) region, Asia Pacific (Tokyo) region, Asia Pacific (Sydney) region, South America (Sao Paulo) region &amp;lt;ref&amp;gt;[http://docs.aws.amazon.com/AmazonS3/latest/dev/Introduction.html#Regions Regions]&amp;lt;/ref&amp;gt;&lt;br /&gt;
&lt;br /&gt;
====Versioning====&lt;br /&gt;
&lt;br /&gt;
All objects in S3 are versioned by default and it can be used to retrieve and restore every version of an object in a bucket. Every change to an object(create, modify, delete) results in a separate version of the object which can be later used for restoring or recovery. Versioning is done at the bucket level and not for individual objects. It can be turned off or on per bucket but a versioned-enabled bucket cannot be turned to an unversioned bucket. Versioning can only be paused in these cases. &amp;lt;ref&amp;gt;[http://docs.aws.amazon.com/AmazonS3/latest/dev/Versioning.html Versioning]&amp;lt;/ref&amp;gt;&lt;br /&gt;
&lt;br /&gt;
====Access Permissions====&lt;br /&gt;
&lt;br /&gt;
All resources (buckets, objects, etc.) are private in Amazon S3 by default. Only the resource owner can access the resource, and the owner can grant other users access to it. There are two types of access policies in S3 - resource-based policies and user policies. Resource-based policies are attached to a particular resource and user policies are assigned to a particular user.&amp;lt;ref&amp;gt;[http://docs.aws.amazon.com/AmazonS3/latest/dev/s3-access-control.html Access Control]&amp;lt;/ref&amp;gt;&lt;br /&gt;
&lt;br /&gt;
====Data Protection====&lt;br /&gt;
&lt;br /&gt;
Objects are redundantly stored on multiple devices across multiple facilities within a region for durability. To improve durability, write requests do not return success until the data has been stored across multiple facilities. Checksums are also used to verify data integrity; if any corruption is detected, it is repaired using the redundant data.&amp;lt;ref&amp;gt;[http://docs.aws.amazon.com/AmazonS3/latest/dev/DataDurability.html Data Durability]&amp;lt;/ref&amp;gt;&lt;br /&gt;
&lt;br /&gt;
===Ruby and Amazon S3===&lt;br /&gt;
&lt;br /&gt;
Amazon Web Services (AWS) provides an SDK &amp;lt;ref&amp;gt;[http://rubygems.org/gems/aws-sdk download]&amp;lt;/ref&amp;gt; that works with Ruby for many Amazon web services, including Amazon S3. Developers new to the AWS SDK should begin with version 2, as it includes many built-in features such as waiters, automatically paginated responses, and a streamlined plugin-style architecture. Version 2 of the SDK is split into two &amp;quot;packages&amp;quot;, also referred to as &amp;quot;gems&amp;quot; &amp;lt;ref&amp;gt;[http://en.wikipedia.org/wiki/RubyGems Wikipedia: RubyGems]&amp;lt;/ref&amp;gt;:&lt;br /&gt;
&lt;br /&gt;
:* '''aws-sdk-core''' - provides a direct mapping to the AWS APIs including automatic response paging, waiters, parameter validation, and Ruby type support&lt;br /&gt;
:* '''aws-sdk-resources''' - provides an object-oriented abstraction over the low-level interfaces in the core gem to reduce the complexity of using them; resource objects (for example, a bucket or an object) expose their attributes and actions as instance methods. A brief sketch contrasting the two interfaces appears after the note below.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
It should also be noted that there is a version 1 of the aws-sdk gem that lacks some of the &amp;quot;convenience features&amp;quot; available in version 2 of the SDK. For more information, see the AWS Ruby Development Blog. &amp;lt;ref&amp;gt;[http://ruby.awsblog.com/post/Tx2OMCYFEZX2I6A/AWS-SDK-for-Ruby-V2-Preview-Release AWS Ruby Development Blog]&amp;lt;/ref&amp;gt;&lt;br /&gt;
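&lt;br /&gt;
The following minimal sketch contrasts the two version 2 interfaces; the region, bucket name, and object key are assumptions made for illustration.&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
require 'aws-sdk'  # version 2 loads both aws-sdk-core and aws-sdk-resources&lt;br /&gt;
&lt;br /&gt;
# Low-level client from aws-sdk-core: one method per API operation.&lt;br /&gt;
client = Aws::S3::Client.new(:region =&amp;gt; 'us-east-1')&lt;br /&gt;
client.list_buckets.buckets.each { |b| puts b.name }&lt;br /&gt;
&lt;br /&gt;
# Object-oriented layer from aws-sdk-resources, built on top of the client.&lt;br /&gt;
s3 = Aws::S3::Resource.new(:client =&amp;gt; client)&lt;br /&gt;
s3.bucket('my-new-bucket').object('hello.txt').put(:body =&amp;gt; 'Hello World!')&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;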
&lt;br /&gt;
==Examples==&lt;br /&gt;
The following section contains examples of Ruby code interfacing with S3. Sources and documentation for the code are provided. Please observe the copyrights if you choose to use any or all of the posted code.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
'''Note:''' There are three key classes in the AWS SDK &amp;lt;ref&amp;gt;[http://docs.aws.amazon.com/AmazonS3/latest/dev/UsingTheMPRubyAPI.html Using the Ruby API]&amp;lt;/ref&amp;gt;:&lt;br /&gt;
&lt;br /&gt;
:* '''AWS::S3''' - Denotes an interface to Amazon S3 for the Ruby SDK. It has the ''#buckets'' instance method for creating new buckets or accessing existing buckets.&lt;br /&gt;
:* '''AWS::S3::Bucket''' - Denotes an Amazon S3 Bucket. It provides the ''#objects'' instance method to access existing objects and also other methods to get information about a bucket.&lt;br /&gt;
:* '''AWS::S3::S3Object''' - Denotes an Amazon S3 object. It provides methods for getting information about an object, as well as for setting access permissions and for copying, deleting, and uploading objects.&lt;br /&gt;
&lt;br /&gt;
===Creating a connection to S3 server===&lt;br /&gt;
&lt;br /&gt;
Connecting to the S3 server is the essential starting point for accessing your data. The following example shows how to connect to the server via SSL. &amp;lt;ref&amp;gt;Wikipedia: SSL http://en.wikipedia.org/wiki/SSL&amp;lt;/ref&amp;gt;&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
AWS::S3::Base.establish_connection!(&lt;br /&gt;
        :server            =&amp;gt; 'objects.example.com',&lt;br /&gt;
        :use_ssl           =&amp;gt; true,&lt;br /&gt;
        :access_key_id     =&amp;gt; 'my-access-key',&lt;br /&gt;
        :secret_access_key =&amp;gt; 'my-secret-key'&lt;br /&gt;
)&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
:Source: http://ceph.com/docs/master/radosgw/s3/ruby/&lt;br /&gt;
&lt;br /&gt;
===Listing all buckets you own===&lt;br /&gt;
Buckets contain the objects you upload. To access any object, you must go through its bucket first. This example shows you how to query the S3 server for a list of all the buckets you own.&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
AWS::S3::Service.buckets.each do |bucket|&lt;br /&gt;
        puts &amp;quot;#{bucket.name}\t#{bucket.creation_date}&amp;quot;&lt;br /&gt;
end&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
'''Expected output:'''&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
mybuckat1   2011-04-21T18:05:39.000Z&lt;br /&gt;
mybuckat2   2011-04-21T18:05:48.000Z&lt;br /&gt;
mybuckat3   2011-04-21T18:07:18.000Z&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
:Source: http://ceph.com/docs/master/radosgw/s3/ruby/&lt;br /&gt;
&lt;br /&gt;
===Listing a bucket's contents===&lt;br /&gt;
For a known bucket (see example for listing all buckets you own), you can also query the server for all objects inside. This code shows you how.&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
new_bucket = AWS::S3::Bucket.find('my-new-bucket')&lt;br /&gt;
new_bucket.each do |object|&lt;br /&gt;
        puts &amp;quot;#{object.key}\t#{object.about['content-length']}\t#{object.about['last-modified']}&amp;quot;&lt;br /&gt;
end&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
'''Expected output'''&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
file1.filex 251262  2011-08-08T21:35:48.000Z&lt;br /&gt;
file2.filex 262518  2011-08-08T21:38:01.000Z&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
:Source: http://ceph.com/docs/master/radosgw/s3/ruby/&lt;br /&gt;
&lt;br /&gt;
===Deleting an object===&lt;br /&gt;
This example shows you how to delete the object &amp;quot;goodbye.txt&amp;quot;. You must specify the bucket as the second parameter of the S3Object.delete function.&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
AWS::S3::S3Object.delete('goodbye.txt', 'my-new-bucket')&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
:Source: http://ceph.com/docs/master/radosgw/s3/ruby/&lt;br /&gt;
&lt;br /&gt;
===Deleting a bucket===&lt;br /&gt;
Bucket removal may be necessary when you're trying to reduce the cost of maintaining your data on the S3 servers. This code allows you to delete buckets but only if they're empty.&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
AWS::S3::Bucket.delete('my-new-bucket')&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
:Source: http://ceph.com/docs/master/radosgw/s3/ruby/&lt;br /&gt;
&lt;br /&gt;
===Forced removal of non-empty buckets===&lt;br /&gt;
If you want to forcibly remove a bucket and dump all of its contents, you can force a deletion as shown below.&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
AWS::S3::Bucket.delete('my-new-bucket', :force =&amp;gt; true)&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
:Source: http://ceph.com/docs/master/radosgw/s3/ruby/&lt;br /&gt;
&lt;br /&gt;
===Creating an object===&lt;br /&gt;
Object creation is important to any program. If your app stores data via objects, you can use the S3 server to host them. In this example, we create a text file &amp;quot;hello.txt&amp;quot; with the content &amp;quot;Hello World!&amp;quot; and save it to my-new-bucket. The :content_type option tells the Ruby method S3Object.store what MIME type to record for the object.&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
AWS::S3::S3Object.store(&lt;br /&gt;
        'hello.txt',&lt;br /&gt;
        'Hello World!',&lt;br /&gt;
        'my-new-bucket',&lt;br /&gt;
        :content_type =&amp;gt; 'text/plain'&lt;br /&gt;
)&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
:Source: http://ceph.com/docs/master/radosgw/s3/ruby/&lt;br /&gt;
&lt;br /&gt;
===Change an object's ACL (access control list)===&lt;br /&gt;
Securing user data from view by any web user is important if you're keeping sensitive information about users. This example shows you how to open the hello.txt object created earlier in my-new-bucket to the public (anyone can access it). It also shows how to restrict secret_plans.txt so that no one else can access it.&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
policy = AWS::S3::S3Object.acl('hello.txt', 'my-new-bucket')&lt;br /&gt;
policy.grants = [ AWS::S3::ACL::Grant.grant(:public_read) ]&lt;br /&gt;
AWS::S3::S3Object.acl('hello.txt', 'my-new-bucket', policy)&lt;br /&gt;
&lt;br /&gt;
policy = AWS::S3::S3Object.acl('secret_plans.txt', 'my-new-bucket')&lt;br /&gt;
policy.grants = []&lt;br /&gt;
AWS::S3::S3Object.acl('secret_plans.txt', 'my-new-bucket', policy)&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
:Source: http://ceph.com/docs/master/radosgw/s3/ruby/&lt;br /&gt;
&lt;br /&gt;
===Generating object download urls===&lt;br /&gt;
Generating download links for objects on the S3 server can make it easier for you to share your objects with other users. This example shows how to create a download link for the hello.txt object we created earlier.&lt;br /&gt;
Note that generating a link requires you to specify the bucket the object resides in. Links can be public, as in the case of hello.txt below, or can be set to expire after a given time through the :expires_in option. In the example, the link for secret_plans.txt expires after 1 hour (60 * 60 seconds).&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
puts AWS::S3::S3Object.url_for(&lt;br /&gt;
        'hello.txt',&lt;br /&gt;
        'my-new-bucket',&lt;br /&gt;
        :authenticated =&amp;gt; false&lt;br /&gt;
)&lt;br /&gt;
&lt;br /&gt;
puts AWS::S3::S3Object.url_for(&lt;br /&gt;
        'secret_plans.txt',&lt;br /&gt;
        'my-new-bucket',&lt;br /&gt;
        :expires_in =&amp;gt; 60 * 60&lt;br /&gt;
)&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
'''Expected Output:'''&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
http://objects.dreamhost.com/my-bucket-name/hello.txt&lt;br /&gt;
http://objects.dreamhost.com/my-bucket-name/secret_plans.txt?Signature=XXXXXXXXXXXXXXXXXXXXXXXXXXX&amp;amp;Expires=1316027075&amp;amp;AWSAccessKeyId=XXXXXXXXXXXXXXXXXXX&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
:Source: http://ceph.com/docs/master/radosgw/s3/ruby/&lt;br /&gt;
&lt;br /&gt;
===Download an object to a local folder===&lt;br /&gt;
An app may need to retrieve objects from S3 and save them locally so they can be used later. This example shows how to download the poetry.pdf object from the bucket &amp;quot;my-new-bucket&amp;quot; and save it to /home/larry/documents/.&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
open('/home/larry/documents/poetry.pdf', 'w') do |file|&lt;br /&gt;
        AWS::S3::S3Object.stream('poetry.pdf', 'my-new-bucket') do |chunk|&lt;br /&gt;
                file.write(chunk)&lt;br /&gt;
        end&lt;br /&gt;
end&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
:Source: http://ceph.com/docs/master/radosgw/s3/ruby/&lt;br /&gt;
&lt;br /&gt;
===Upload a file to Amazon S3===&lt;br /&gt;
:As per the Apache License v2.0, the following code may be reproduced and redistributed under the following &amp;lt;ref&amp;gt;[http://aws.amazon.com/apache-2-0/ license]&amp;lt;/ref&amp;gt;.&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
# Copyright 2011-2013 Amazon.com, Inc. or its affiliates. All Rights Reserved.&lt;br /&gt;
#&lt;br /&gt;
# Licensed under the Apache License, Version 2.0 (the &amp;quot;License&amp;quot;). You&lt;br /&gt;
# may not use this file except in compliance with the License. A copy of&lt;br /&gt;
# the License is located at&lt;br /&gt;
#&lt;br /&gt;
#     http://aws.amazon.com/apache2.0/&lt;br /&gt;
#&lt;br /&gt;
# or in the &amp;quot;license&amp;quot; file accompanying this file. This file is&lt;br /&gt;
# distributed on an &amp;quot;AS IS&amp;quot; BASIS, WITHOUT WARRANTIES OR CONDITIONS OF&lt;br /&gt;
# ANY KIND, either express or implied. See the License for the specific&lt;br /&gt;
# language governing permissions and limitations under the License.&lt;br /&gt;
&lt;br /&gt;
require 'aws-sdk'&lt;br /&gt;
&lt;br /&gt;
(bucket_name, file_name) = ARGV&lt;br /&gt;
unless bucket_name &amp;amp;&amp;amp; file_name&lt;br /&gt;
  puts &amp;quot;Usage: upload_file.rb &amp;lt;BUCKET_NAME&amp;gt; &amp;lt;FILE_NAME&amp;gt;&amp;quot;&lt;br /&gt;
  exit 1&lt;br /&gt;
end&lt;br /&gt;
&lt;br /&gt;
# get an instance of the S3 interface using the default configuration&lt;br /&gt;
s3 = AWS::S3.new&lt;br /&gt;
&lt;br /&gt;
# create a bucket&lt;br /&gt;
b = s3.buckets.create(bucket_name)&lt;br /&gt;
&lt;br /&gt;
# upload a file&lt;br /&gt;
basename = File.basename(file_name)&lt;br /&gt;
o = b.objects[basename]&lt;br /&gt;
o.write(:file =&amp;gt; file_name)&lt;br /&gt;
&lt;br /&gt;
puts &amp;quot;Uploaded #{file_name} to:&amp;quot;&lt;br /&gt;
puts o.public_url&lt;br /&gt;
&lt;br /&gt;
# generate a presigned URL&lt;br /&gt;
puts &amp;quot;\nUse this URL to download the file:&amp;quot;&lt;br /&gt;
puts o.url_for(:read)&lt;br /&gt;
&lt;br /&gt;
puts &amp;quot;(press any key to delete the object)&amp;quot;&lt;br /&gt;
$stdin.getc&lt;br /&gt;
&lt;br /&gt;
o.delete&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
See the following link for the AWS SDK for Ruby documentation: &amp;lt;ref&amp;gt;[http://docs.aws.amazon.com/AWSRubySDK/latest/_index.html AWS SDK for Ruby]&amp;lt;/ref&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==References==&lt;br /&gt;
&amp;lt;references/&amp;gt;&lt;/div&gt;</summary>
		<author><name>Achen4</name></author>
	</entry>
	<entry>
		<id>https://wiki.expertiza.ncsu.edu/index.php?title=CSC/ECE_517_Spring_2015/ch1a_7_SA&amp;diff=93712</id>
		<title>CSC/ECE 517 Spring 2015/ch1a 7 SA</title>
		<link rel="alternate" type="text/html" href="https://wiki.expertiza.ncsu.edu/index.php?title=CSC/ECE_517_Spring_2015/ch1a_7_SA&amp;diff=93712"/>
		<updated>2015-02-14T01:43:02Z</updated>

		<summary type="html">&lt;p&gt;Achen4: /* Generating object download urls */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;[[File:S3.gif|frame|Source: http://www.w7cloud.com/7-reasons-to-use-amazon-s3-cloud-computing-online-storage/|right]] Amazon Simple Storage Service (Amazon S3) is a remote, scalable, secure, and cost-efficient storage service provided by Amazon. Users are able to access their storage on Amazon S3 from the web via REST &amp;lt;ref&amp;gt;[http://en.wikipedia.org/wiki/Representational_state_transfer Wikipedia: REST]&amp;lt;/ref&amp;gt;, HTTP &amp;lt;ref&amp;gt;[http://en.wikipedia.org/wiki/Hypertext_Transfer_Protocol HTTP]&amp;lt;/ref&amp;gt;, or SOAP &amp;lt;ref&amp;gt;[http://en.wikipedia.org/wiki/SOAP SOAP]&amp;lt;/ref&amp;gt;, making their data accessible from virtually anywhere in the world. Amazon S3 implements redundancy across multiple devices in multiple facilities in order to safeguard against application failure and data loss and to minimize downtime &amp;lt;ref&amp;gt;[http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/AmazonS3.html Amazon S3]&amp;lt;/ref&amp;gt;. Some of the most prominent users of Amazon S3 include Netflix, SmugMug, WeTransfer, Pinterest, and NASDAQ &amp;lt;ref&amp;gt;[http://aws.amazon.com/s3/ Amazon S3 Homepage]&amp;lt;/ref&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
[https://docs.google.com/a/ncsu.edu/document/d/1TgBtp7flIPKJwkkShgtcIkt--mtHuwVHsQX6Tpzj1rc/edit Writing Assignment 1a]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
==Background==&lt;br /&gt;
&lt;br /&gt;
Amazon S3 launched in March of 2006 in the United States &amp;lt;ref&amp;gt;[http://phx.corporate-ir.net/phoenix.zhtml?c=176060&amp;amp;p=irol-newsArticle&amp;amp;ID=830816 Amazon Press Release]&amp;lt;/ref&amp;gt; and in Europe in November of 2007 &amp;lt;ref&amp;gt;[http://phx.corporate-ir.net/phoenix.zhtml?c=176060&amp;amp;p=irol-newsArticle&amp;amp;ID=1072982 Amazon Press Release]&amp;lt;/ref&amp;gt;. Since its inception, Amazon S3 has reported tremendous growth. Beginning in July of 2006, S3 hosted 800 million objects; April of 2007, 5 billion objects; October of 2007, 10 billion; Jan 2008, 14 billion &amp;lt;ref&amp;gt;[http://www.allthingsdistributed.com/2008/03/happy_birthday_amazon_s3.html Happy birthday Amazon S3]&amp;lt;/ref&amp;gt;; October 2008, 29 billion &amp;lt;ref&amp;gt;[https://aws.amazon.com/blogs/aws/amazon-s3-now/ amazon s3 now]&amp;lt;/ref&amp;gt;; March 2009, 52 billion &amp;lt;ref&amp;gt;[https://aws.amazon.com/blogs/aws/celebrating-s3s-third-birthday-with-an-upload-promotion/ Upload promotion]&amp;lt;/ref&amp;gt;; August 2009, 64 billion &amp;lt;ref&amp;gt;[http://www.eweek.com/c/a/Cloud-Computing/Amazons-Head-Start-in-the-Cloud-Pays-Off-584083 Amazons Head Start in the Cloud]&amp;lt;/ref&amp;gt;. In April of 2013, S3 now hosts more than 2 trillion objects and on average 1.1 million requests every second! &amp;lt;ref&amp;gt;[https://aws.amazon.com/blogs/aws/amazon-s3-two-trillion-objects-11-million-requests-second/ Two trillion objects]&amp;lt;/ref&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
===Design===&lt;br /&gt;
&lt;br /&gt;
S3 is an example of object storage and is not a traditional hierarchical file system. S3 exposes a simple feature set to improve robustness, and all data in S3 is accessed in terms of objects and buckets. &lt;br /&gt;
&lt;br /&gt;
====Objects====&lt;br /&gt;
&lt;br /&gt;
Objects are the basic units of storage in Amazon S3. Each object is composed of object data and metadata, and an object can be up to 5 terabytes in size. The metadata is a set of name-value pairs that describe the object, such as the date it was last modified; users can also store custom name-value pairs about the object in its metadata. Every object is identified by a user-defined key and, when versioning is enabled on its bucket, by a version ID. &amp;lt;ref&amp;gt;[http://docs.aws.amazon.com/AmazonS3/latest/dev/Introduction.html S3 Introduction]&amp;lt;/ref&amp;gt; An object consists of the following: key, version ID, value, metadata, subresources, and access control information. &amp;lt;ref&amp;gt;[http://docs.aws.amazon.com/AmazonS3/latest/dev/UsingObjects.html Using Objects]&amp;lt;/ref&amp;gt;&lt;br /&gt;
&lt;br /&gt;
====Buckets====&lt;br /&gt;
&lt;br /&gt;
A bucket is a container for objects and every object must be part of a bucket. Any number of objects can be part of a Bucket. Buckets can be configured to be hosted in a particular region (US, EU, Asia Pacific etc.) in order to optimize latency. S3 limits the number of buckets per account to 100. &amp;lt;ref&amp;gt;[http://docs.aws.amazon.com/AmazonS3/latest/dev/UsingBucket.html Using Bucket]&amp;lt;/ref&amp;gt;&lt;br /&gt;
&lt;br /&gt;
====Keys and Metadata====&lt;br /&gt;
&lt;br /&gt;
A user specifies a key at object creation, and the key uniquely identifies the object within its bucket. Keys can be at most 1024 bytes long.&lt;br /&gt;
&lt;br /&gt;
There are two kinds of metadata for an object: system metadata and object (user-defined) metadata. System metadata is used by S3 for object management; for example, the last-modified date and the Content-Type are stored as system metadata. Object metadata is optional and can be supplied by the user to attach additional information to an object when it is created. &amp;lt;ref&amp;gt;[http://docs.aws.amazon.com/AmazonS3/latest/dev/UsingMetadata.html Using Metadata]&amp;lt;/ref&amp;gt;&lt;br /&gt;
&lt;br /&gt;
====Regions====&lt;br /&gt;
&lt;br /&gt;
Regions allow a user to specify the geographical region where a bucket will be stored. This can be used to optimize latency and minimize costs.&lt;br /&gt;
S3 supports the following regions - US Standard, US West (Oregon) region, US West (N. California) region, EU (Ireland) region, EU (Frankfurt) region, Asia Pacific (Singapore) region, Asia Pacific (Tokyo) region, Asia Pacific (Sydney) region, South America (Sao Paulo) region &amp;lt;ref&amp;gt;[http://docs.aws.amazon.com/AmazonS3/latest/dev/Introduction.html#Regions Regions]&amp;lt;/ref&amp;gt;&lt;br /&gt;
&lt;br /&gt;
====Versioning====&lt;br /&gt;
&lt;br /&gt;
Versioning can be used to retrieve and restore every version of an object in a bucket. Once versioning is enabled, every change to an object (create, modify, delete) results in a separate version of the object, which can later be used for restoration or recovery. Versioning is configured at the bucket level, not for individual objects. It can be enabled per bucket, but a version-enabled bucket cannot be returned to an unversioned state; versioning can only be suspended. &amp;lt;ref&amp;gt;[http://docs.aws.amazon.com/AmazonS3/latest/dev/Versioning.html Versioning]&amp;lt;/ref&amp;gt;&lt;br /&gt;
&lt;br /&gt;
====Access Permissions====&lt;br /&gt;
&lt;br /&gt;
All resources (buckets, objects, etc.) are private in Amazon S3 by default. Only the resource owner can access a resource, and the owner can grant access to other users. There are two types of access policies in S3: resource-based policies and user policies. Resource-based policies are attached to a particular resource, while user policies are assigned to a particular user.&amp;lt;ref&amp;gt;[http://docs.aws.amazon.com/AmazonS3/latest/dev/s3-access-control.html Access Control]&amp;lt;/ref&amp;gt;&lt;br /&gt;
&lt;br /&gt;
====Data Protection====&lt;br /&gt;
&lt;br /&gt;
Objects are redundantly stored on multiple devices across multiple facilities within a region for durability. To improve durability, write requests do not return success before storing the data across multiple facilities. Also checksums are used to verify data integrity. If any corruption is detected, it is repaired using redundant data.&amp;lt;ref&amp;gt;[http://docs.aws.amazon.com/AmazonS3/latest/dev/DataDurability.html Data Durability]&amp;lt;/ref&amp;gt;&lt;br /&gt;
&lt;br /&gt;
===Ruby and Amazon S3===&lt;br /&gt;
&lt;br /&gt;
Amazon Web Services (AWS) provides an SDK &amp;lt;ref&amp;gt;[http://rubygems.org/gems/aws-sdk download]&amp;lt;/ref&amp;gt; that works with Ruby for many Amazon web services, including Amazon S3. Developers new to the AWS SDK should begin with version 2, as it includes many built-in features such as waiters, automatically paginated responses, and a streamlined plugin-style architecture. Version 2 of the SDK is split into two &amp;quot;packages&amp;quot;, also referred to as &amp;quot;gems&amp;quot; &amp;lt;ref&amp;gt;[http://en.wikipedia.org/wiki/RubyGems Wikipedia: RubyGems]&amp;lt;/ref&amp;gt;:&lt;br /&gt;
&lt;br /&gt;
:* '''aws-sdk-core''' - provides a direct mapping to the AWS APIs including automatic response paging, waiters, parameter validation, and Ruby type support&lt;br /&gt;
:* '''aws-sdk-resources'''  - provides an object-oriented abstraction over low-level interfaces in the core to reduce the complexity of utilizing core interfaces; resource objects reference other objects such as an Amazon S3 instance and the attributes and actions as instance variables and methods.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
It should also be noted that there is a version 1 of the aws-sdk gem that lacks some of the &amp;quot;convenience features&amp;quot; available in version 2 of the SDK. For more information, see the AWS Ruby Development Blog. &amp;lt;ref&amp;gt;[http://ruby.awsblog.com/post/Tx2OMCYFEZX2I6A/AWS-SDK-for-Ruby-V2-Preview-Release AWS Ruby Development Blog]&amp;lt;/ref&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==Examples==&lt;br /&gt;
The following section contains examples of ruby interfacing with S3. Sources and documentation for the code are provided. Please observe the copyrights if you choose to use any or all of the posted code.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
'''Note:''' There are 3 key classes in AWS SDK &amp;lt;ref&amp;gt;[http://docs.aws.amazon.com/AmazonS3/latest/dev/UsingTheMPRubyAPI.html Using the Ruby API]&amp;lt;/ref&amp;gt; -&lt;br /&gt;
&lt;br /&gt;
:* '''AWS::S3''' - Denotes an interface to Amazon S3 for the Ruby SDK. It has the ''#buckets'' instance method for creating new buckets or accessing existing buckets.&lt;br /&gt;
:* '''AWS::S3::Bucket''' - Denotes an Amazon S3 Bucket. It provides the ''#objects'' instance method to access existing objects and also other methods to get information about a bucket.&lt;br /&gt;
:* '''AWS::S3::S3Object''' - Denotes an Amazon S3 object. It provides methods for getting information about an object, as well as for setting access permissions and for copying, deleting, and uploading objects.&lt;br /&gt;
&lt;br /&gt;
===Creating a connection to S3 server===&lt;br /&gt;
&lt;br /&gt;
Connecting to the S3 server is the essential starting point for accessing your data. The following example shows how to connect to the server via SSL. &amp;lt;ref&amp;gt;Wikipedia: SSL http://en.wikipedia.org/wiki/SSL&amp;lt;/ref&amp;gt;&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
AWS::S3::Base.establish_connection!(&lt;br /&gt;
        :server            =&amp;gt; 'objects.example.com',&lt;br /&gt;
        :use_ssl           =&amp;gt; true,&lt;br /&gt;
        :access_key_id     =&amp;gt; 'my-access-key',&lt;br /&gt;
        :secret_access_key =&amp;gt; 'my-secret-key'&lt;br /&gt;
)&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
:Source: http://ceph.com/docs/master/radosgw/s3/ruby/&lt;br /&gt;
&lt;br /&gt;
===Listing all buckets you own===&lt;br /&gt;
Buckets hold data about the objects you upload. To access any object, you must access the bucket first. This example shows you how to query the S3 server for a list of all the buckets you own.&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
AWS::S3::Service.buckets.each do |bucket|&lt;br /&gt;
        puts &amp;quot;#{bucket.name}\t#{bucket.creation_date}&amp;quot;&lt;br /&gt;
end&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
'''Expected output:'''&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
mybuckat1   2011-04-21T18:05:39.000Z&lt;br /&gt;
mybuckat2   2011-04-21T18:05:48.000Z&lt;br /&gt;
mybuckat3   2011-04-21T18:07:18.000Z&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
:Source: http://ceph.com/docs/master/radosgw/s3/ruby/&lt;br /&gt;
&lt;br /&gt;
===Listing a bucket's contents===&lt;br /&gt;
For a known bucket (see example for listing all buckets you own), you can also query the server for all objects inside. This code shows you how.&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
new_bucket = AWS::S3::Bucket.find('my-new-bucket')&lt;br /&gt;
new_bucket.each do |object|&lt;br /&gt;
        puts &amp;quot;#{object.key}\t#{object.about['content-length']}\t#{object.about['last-modified']}&amp;quot;&lt;br /&gt;
end&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
'''Expected output'''&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
file1.filex 251262  2011-08-08T21:35:48.000Z&lt;br /&gt;
file2.filex 262518  2011-08-08T21:38:01.000Z&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
:Source: http://ceph.com/docs/master/radosgw/s3/ruby/&lt;br /&gt;
&lt;br /&gt;
===Deleting an object===&lt;br /&gt;
This example shows you how to delete the object &amp;quot;goodbye.txt&amp;quot;. You must specify the bucket as the second parameter of the S3Object.delete function.&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
AWS::S3::S3Object.delete('goodbye.txt', 'my-new-bucket')&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
:Source: http://ceph.com/docs/master/radosgw/s3/ruby/&lt;br /&gt;
&lt;br /&gt;
===Deleting a bucket===&lt;br /&gt;
Bucket removal may be necessary when you're trying to reduce the cost of maintaining your data on the S3 servers. This code allows you to delete buckets but only if they're empty.&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
AWS::S3::Bucket.delete('my-new-bucket')&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
:Source: http://ceph.com/docs/master/radosgw/s3/ruby/&lt;br /&gt;
&lt;br /&gt;
===Forced removal of non-empty buckets===&lt;br /&gt;
If you want to forcibly remove a bucket and dump all of its contents, you can force a deletion as shown below.&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
AWS::S3::Bucket.delete('my-new-bucket', :force =&amp;gt; true)&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
:Source: http://ceph.com/docs/master/radosgw/s3/ruby/&lt;br /&gt;
&lt;br /&gt;
===Creating an object===&lt;br /&gt;
Object creation is important to any program. If your app stores data via objects, you can use the S3 server to host them. In this example we create a text file &amp;quot;hello.txt&amp;quot; with content &amp;quot;Hello World!&amp;quot; and save it to my-new-bucket. Note that the :content_type is required for the ruby method S3Object.store.&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
AWS::S3::S3Object.store(&lt;br /&gt;
        'hello.txt',&lt;br /&gt;
        'Hello World!',&lt;br /&gt;
        'my-new-bucket',&lt;br /&gt;
        :content_type =&amp;gt; 'text/plain'&lt;br /&gt;
)&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
:Source: http://ceph.com/docs/master/radosgw/s3/ruby/&lt;br /&gt;
&lt;br /&gt;
===Change an object's ACL (access control list)===&lt;br /&gt;
Securing user data from view by any web user is important if you're keeping sensitive information about users. This example shows you how to open the hello.txt object created earlier in my-new-bucket to the public (anyone can access it). It also shows how to restrict secret_plans.txt so that no one else can access it.&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
policy = AWS::S3::S3Object.acl('hello.txt', 'my-new-bucket')&lt;br /&gt;
policy.grants = [ AWS::S3::ACL::Grant.grant(:public_read) ]&lt;br /&gt;
AWS::S3::S3Object.acl('hello.txt', 'my-new-bucket', policy)&lt;br /&gt;
&lt;br /&gt;
policy = AWS::S3::S3Object.acl('secret_plans.txt', 'my-new-bucket')&lt;br /&gt;
policy.grants = []&lt;br /&gt;
AWS::S3::S3Object.acl('secret_plans.txt', 'my-new-bucket', policy)&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
:Source: http://ceph.com/docs/master/radosgw/s3/ruby/&lt;br /&gt;
&lt;br /&gt;
===Generating object download urls===&lt;br /&gt;
Generating download links for objects on the S3 server can make it easier for you to share your objects with other users. This example shows how to create a download link for the hello.txt object we created earlier.&lt;br /&gt;
Note that generating a link requires you to specify the bucket it resides in. Links can be public, as in the case of hello.txt below, or be set to expire on a timer through the expires_in symbol. In the example, the expiration of secret_plans.txt is 1 hour (60s * 60).&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
puts AWS::S3::S3Object.url_for(&lt;br /&gt;
        'hello.txt',&lt;br /&gt;
        'my-new-bucket',&lt;br /&gt;
        :authenticated =&amp;gt; false&lt;br /&gt;
)&lt;br /&gt;
&lt;br /&gt;
puts AWS::S3::S3Object.url_for(&lt;br /&gt;
        'secret_plans.txt',&lt;br /&gt;
        'my-new-bucket',&lt;br /&gt;
        :expires_in =&amp;gt; 60 * 60&lt;br /&gt;
)&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
'''Expected Output:'''&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
http://objects.dreamhost.com/my-bucket-name/hello.txt&lt;br /&gt;
http://objects.dreamhost.com/my-bucket-name/secret_plans.txt?Signature=XXXXXXXXXXXXXXXXXXXXXXXXXXX&amp;amp;Expires=1316027075&amp;amp;AWSAccessKeyId=XXXXXXXXXXXXXXXXXXX&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
:Source: http://ceph.com/docs/master/radosgw/s3/ruby/&lt;br /&gt;
&lt;br /&gt;
===Download an object to a folder===&lt;br /&gt;
:'''Note:''' ''This downloads the object poetry.pdf and saves it in /home/larry/documents/''&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
open('/home/larry/documents/poetry.pdf', 'w') do |file|&lt;br /&gt;
        AWS::S3::S3Object.stream('poetry.pdf', 'my-new-bucket') do |chunk|&lt;br /&gt;
                file.write(chunk)&lt;br /&gt;
        end&lt;br /&gt;
end&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
:Source: http://ceph.com/docs/master/radosgw/s3/ruby/&lt;br /&gt;
&lt;br /&gt;
===Upload a file to Amazon S3===&lt;br /&gt;
:As per the Apache License v2.0, the following code may be reproduced and redistributed under the following &amp;lt;ref&amp;gt;[http://aws.amazon.com/apache-2-0/ license]&amp;lt;/ref&amp;gt;.&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
# Copyright 2011-2013 Amazon.com, Inc. or its affiliates. All Rights Reserved.&lt;br /&gt;
#&lt;br /&gt;
# Licensed under the Apache License, Version 2.0 (the &amp;quot;License&amp;quot;). You&lt;br /&gt;
# may not use this file except in compliance with the License. A copy of&lt;br /&gt;
# the License is located at&lt;br /&gt;
#&lt;br /&gt;
#     http://aws.amazon.com/apache2.0/&lt;br /&gt;
#&lt;br /&gt;
# or in the &amp;quot;license&amp;quot; file accompanying this file. This file is&lt;br /&gt;
# distributed on an &amp;quot;AS IS&amp;quot; BASIS, WITHOUT WARRANTIES OR CONDITIONS OF&lt;br /&gt;
# ANY KIND, either express or implied. See the License for the specific&lt;br /&gt;
# language governing permissions and limitations under the License.&lt;br /&gt;
&lt;br /&gt;
require 'aws-sdk'&lt;br /&gt;
&lt;br /&gt;
(bucket_name, file_name) = ARGV&lt;br /&gt;
unless bucket_name &amp;amp;&amp;amp; file_name&lt;br /&gt;
  puts &amp;quot;Usage: upload_file.rb &amp;lt;BUCKET_NAME&amp;gt; &amp;lt;FILE_NAME&amp;gt;&amp;quot;&lt;br /&gt;
  exit 1&lt;br /&gt;
end&lt;br /&gt;
&lt;br /&gt;
# get an instance of the S3 interface using the default configuration&lt;br /&gt;
s3 = AWS::S3.new&lt;br /&gt;
&lt;br /&gt;
# create a bucket&lt;br /&gt;
b = s3.buckets.create(bucket_name)&lt;br /&gt;
&lt;br /&gt;
# upload a file&lt;br /&gt;
basename = File.basename(file_name)&lt;br /&gt;
o = b.objects[basename]&lt;br /&gt;
o.write(:file =&amp;gt; file_name)&lt;br /&gt;
&lt;br /&gt;
puts &amp;quot;Uploaded #{file_name} to:&amp;quot;&lt;br /&gt;
puts o.public_url&lt;br /&gt;
&lt;br /&gt;
# generate a presigned URL&lt;br /&gt;
puts &amp;quot;\nUse this URL to download the file:&amp;quot;&lt;br /&gt;
puts o.url_for(:read)&lt;br /&gt;
&lt;br /&gt;
puts &amp;quot;(press any key to delete the object)&amp;quot;&lt;br /&gt;
$stdin.getc&lt;br /&gt;
&lt;br /&gt;
o.delete&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
See the following link for the documentation for AWS SDK - &amp;lt;ref&amp;gt;[http://docs.aws.amazon.com/AWSRubySDK/latest/_index.html AWS SDK for Ruby]&amp;lt;/ref&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==References==&lt;br /&gt;
&amp;lt;references/&amp;gt;&lt;/div&gt;</summary>
		<author><name>Achen4</name></author>
	</entry>
	<entry>
		<id>https://wiki.expertiza.ncsu.edu/index.php?title=CSC/ECE_517_Spring_2015/ch1a_7_SA&amp;diff=93711</id>
		<title>CSC/ECE 517 Spring 2015/ch1a 7 SA</title>
		<link rel="alternate" type="text/html" href="https://wiki.expertiza.ncsu.edu/index.php?title=CSC/ECE_517_Spring_2015/ch1a_7_SA&amp;diff=93711"/>
		<updated>2015-02-14T01:42:27Z</updated>

		<summary type="html">&lt;p&gt;Achen4: /* Generating object download urls */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;[[File:S3.gif|frame|Source: http://www.w7cloud.com/7-reasons-to-use-amazon-s3-cloud-computing-online-storage/|right]] Amazon Simple Storage Service (Amazon S3) is a remote, scalable, secure, and cost efficient storage space service provided by Amazon. Users are able to access their storage on Amazon S3 from the web via REST &amp;lt;ref&amp;gt;[http://en.wikipedia.org/wiki/Representational_state_transfer Wikipedia: REST]&amp;lt;/ref&amp;gt; HTTP &amp;lt;ref&amp;gt;[http://en.wikipedia.org/wiki/Hypertext_Transfer_Protocol HTTP]&amp;lt;/ref&amp;gt;, or SOAP &amp;lt;ref&amp;gt;[http://en.wikipedia.org/wiki/SOAP SOAP]&amp;lt;/ref&amp;gt; making their data accessible from virtually anywhere in the world. Amazon S3 implements redundancy across multiple devices on multiple facilities in order to safeguard against application failure ,data loss and minimization of downtime &amp;lt;ref&amp;gt;[http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/AmazonS3.html Amazon S3]&amp;lt;/ref&amp;gt;. Some of the most prominent users of Amazon S3 include: Netflix, SmugMug, Wetransfer, Pinterest, and NASDAQ &amp;lt;ref&amp;gt;[http://aws.amazon.com/s3/ Amazon S3 Homepage]&amp;lt;/ref&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
[https://docs.google.com/a/ncsu.edu/document/d/1TgBtp7flIPKJwkkShgtcIkt--mtHuwVHsQX6Tpzj1rc/edit Writing Assignment 1a]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
==Background==&lt;br /&gt;
&lt;br /&gt;
Amazon S3 launched in March of 2006 in the United States &amp;lt;ref&amp;gt;[http://phx.corporate-ir.net/phoenix.zhtml?c=176060&amp;amp;p=irol-newsArticle&amp;amp;ID=830816 Amazon Press Release]&amp;lt;/ref&amp;gt; and in Europe in November of 2007 &amp;lt;ref&amp;gt;[http://phx.corporate-ir.net/phoenix.zhtml?c=176060&amp;amp;p=irol-newsArticle&amp;amp;ID=1072982 Amazon Press Release]&amp;lt;/ref&amp;gt;. Since its inception, Amazon S3 has reported tremendous growth. Beginning in July of 2006, S3 hosted 800 million objects; April of 2007, 5 billion objects; October of 2007, 10 billion; Jan 2008, 14 billion &amp;lt;ref&amp;gt;[http://www.allthingsdistributed.com/2008/03/happy_birthday_amazon_s3.html Happy birthday Amazon S3]&amp;lt;/ref&amp;gt;; October 2008, 29 billion &amp;lt;ref&amp;gt;[https://aws.amazon.com/blogs/aws/amazon-s3-now/ amazon s3 now]&amp;lt;/ref&amp;gt;; March 2009, 52 billion &amp;lt;ref&amp;gt;[https://aws.amazon.com/blogs/aws/celebrating-s3s-third-birthday-with-an-upload-promotion/ Upload promotion]&amp;lt;/ref&amp;gt;; August 2009, 64 billion &amp;lt;ref&amp;gt;[http://www.eweek.com/c/a/Cloud-Computing/Amazons-Head-Start-in-the-Cloud-Pays-Off-584083 Amazons Head Start in the Cloud]&amp;lt;/ref&amp;gt;. In April of 2013, S3 now hosts more than 2 trillion objects and on average 1.1 million requests every second! &amp;lt;ref&amp;gt;[https://aws.amazon.com/blogs/aws/amazon-s3-two-trillion-objects-11-million-requests-second/ Two trillion objects]&amp;lt;/ref&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
===Design===&lt;br /&gt;
&lt;br /&gt;
S3 is an example of object storage and is not a traditional hierarchical file system. S3 exposes a simple feature set to improve robustness, and all data in S3 is accessed in terms of objects and buckets. &lt;br /&gt;
&lt;br /&gt;
====Objects====&lt;br /&gt;
&lt;br /&gt;
Objects are the basic units of storage in Amazon S3. Each object is composed of object data and metadata. S3 supports a size of up to 5 Terabytes per object. Each object has an associated metadata that is used to identify the object. Metadata is a set of name-value pairs that describe the object like date modified. Custom data about the object can be stored in metadata by the user. Every object is identified by a user defined key and is versioned by default. &amp;lt;ref&amp;gt;[http://docs.aws.amazon.com/AmazonS3/latest/dev/Introduction.html S3 Introduction]&amp;lt;/ref&amp;gt;. An object consists of the following - Key, Version ID, Value, Metadata, Subresources and Access Control Information. &amp;lt;ref&amp;gt;[http://docs.aws.amazon.com/AmazonS3/latest/dev/UsingObjects.html Using Objects]&amp;lt;/ref&amp;gt;&lt;br /&gt;
&lt;br /&gt;
====Buckets====&lt;br /&gt;
&lt;br /&gt;
A bucket is a container for objects and every object must be part of a bucket. Any number of objects can be part of a Bucket. Buckets can be configured to be hosted in a particular region (US, EU, Asia Pacific etc.) in order to optimize latency. S3 limits the number of buckets per account to 100. &amp;lt;ref&amp;gt;[http://docs.aws.amazon.com/AmazonS3/latest/dev/UsingBucket.html Using Bucket]&amp;lt;/ref&amp;gt;&lt;br /&gt;
&lt;br /&gt;
====Keys and Metadata====&lt;br /&gt;
&lt;br /&gt;
A user specifies a key at object creation, and the key uniquely identifies the object within its bucket. Keys can be at most 1024 bytes long.&lt;br /&gt;
&lt;br /&gt;
There are two kinds of metadata for an object - System metadata and Object metadata. System metadata is used by S3 for object management. For eg. - Data, Content-Type etc. are stored as System metadata. Object metadata is optional and can be used by the user to add additional metadata to the objects during object creation. &amp;lt;ref&amp;gt;[http://docs.aws.amazon.com/AmazonS3/latest/dev/UsingMetadata.html Using Metadata]&amp;lt;/ref&amp;gt;&lt;br /&gt;
&lt;br /&gt;
====Regions====&lt;br /&gt;
&lt;br /&gt;
Regions allow a user to specify the geographical region where a bucket will be stored. This can be used to optimize latency and minimize costs.&lt;br /&gt;
S3 supports the following regions - US Standard, US West (Oregon) region, US West (N. California) region, EU (Ireland) region, EU (Frankfurt) region, Asia Pacific (Singapore) region, Asia Pacific (Tokyo) region, Asia Pacific (Sydney) region, South America (Sao Paulo) region &amp;lt;ref&amp;gt;[http://docs.aws.amazon.com/AmazonS3/latest/dev/Introduction.html#Regions Regions]&amp;lt;/ref&amp;gt;&lt;br /&gt;
&lt;br /&gt;
====Versioning====&lt;br /&gt;
&lt;br /&gt;
All objects in S3 are versioned by default and it can be used to retrieve and restore every version of an object in a bucket. Every change to an object(create, modify, delete) results in a separate version of the object which can be later used for restoring or recovery. Versioning is done at the bucket level and not for individual objects. It can be turned off or on per bucket but a versioned-enabled bucket cannot be turned to an unversioned bucket. Versioning can only be paused in these cases. &amp;lt;ref&amp;gt;[http://docs.aws.amazon.com/AmazonS3/latest/dev/Versioning.html Versioning]&amp;lt;/ref&amp;gt;&lt;br /&gt;
&lt;br /&gt;
====Access Permissions====&lt;br /&gt;
&lt;br /&gt;
All resources (buckets, objects, etc.) are private in Amazon S3 by default. Only the resource owner can access a resource, and the owner can grant access to other users. There are two types of access policies in S3: resource-based policies and user policies. Resource-based policies are attached to a particular resource, while user policies are assigned to a particular user.&amp;lt;ref&amp;gt;[http://docs.aws.amazon.com/AmazonS3/latest/dev/s3-access-control.html Access Control]&amp;lt;/ref&amp;gt;&lt;br /&gt;
&lt;br /&gt;
====Data Protection====&lt;br /&gt;
&lt;br /&gt;
Objects are redundantly stored on multiple devices across multiple facilities within a region for durability. To improve durability, write requests do not return success before storing the data across multiple facilities. Also checksums are used to verify data integrity. If any corruption is detected, it is repaired using redundant data.&amp;lt;ref&amp;gt;[http://docs.aws.amazon.com/AmazonS3/latest/dev/DataDurability.html Data Durability]&amp;lt;/ref&amp;gt;&lt;br /&gt;
&lt;br /&gt;
===Ruby and Amazon S3===&lt;br /&gt;
&lt;br /&gt;
Amazon Web Services (AWS) provides an SDK &amp;lt;ref&amp;gt;[http://rubygems.org/gems/aws-sdk| download]&amp;lt;/ref&amp;gt; that works with Ruby for many amazon webservices, including Amazon S3. Developers new to the Amazon AWS SDK should begin with version 2 as it includes many built in features such as waiters, automatically paginated responses, and a streamlined plugin style architecture. Version 2 of the SDK has 2 &amp;quot;packages&amp;quot;, also referred to as &amp;quot;gems&amp;quot; &amp;lt;ref&amp;gt;[http://en.wikipedia.org/wiki/RubyGems Wikipedia: RubyGems]&amp;lt;/ref&amp;gt;:&lt;br /&gt;
&lt;br /&gt;
:* '''aws-sdk-core''' - provides a direct mapping to the AWS APIs including automatic response paging, waiters, parameter validation, and Ruby type support&lt;br /&gt;
:* '''aws-sdk-resources'''  - provides an object-oriented abstraction over low-level interfaces in the core to reduce the complexity of utilizing core interfaces; resource objects reference other objects such as an Amazon S3 instance and the attributes and actions as instance variables and methods.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
It should also be noted that there exists a Version 1 of the aws sdk that lacks some &amp;quot;convenience features&amp;quot; otherwise available in version 2 of the sdk. For more information see the &amp;lt;ref&amp;gt;[http://ruby.awsblog.com/post/Tx2OMCYFEZX2I6A/AWS-SDK-for-Ruby-V2-Preview-Release|AWS Ruby Development Blog]&amp;lt;/ref&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==Examples==&lt;br /&gt;
The following section contains examples of ruby interfacing with S3. Sources and documentation for the code are provided. Please observe the copyrights if you choose to use any or all of the posted code.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
'''Note:''' There are 3 key classes in AWS SDK &amp;lt;ref&amp;gt;[http://docs.aws.amazon.com/AmazonS3/latest/dev/UsingTheMPRubyAPI.html Using the Ruby API]&amp;lt;/ref&amp;gt; -&lt;br /&gt;
&lt;br /&gt;
:* '''AWS::S3''' - Denotes an interface to Amazon S3 for the Ruby SDK. It has the ''#buckets'' instance method for creating new buckets or accessing existing buckets.&lt;br /&gt;
:* '''AWS::S3::Bucket''' - Denotes an Amazon S3 Bucket. It provides the ''#objects'' instance method to access existing objects and also other methods to get information about a bucket.&lt;br /&gt;
:* '''AWS::S3::S3Object''' - Denotes an Amazon S3 object. It provides methods for getting information about an object, as well as for setting access permissions and for copying, deleting, and uploading objects.&lt;br /&gt;
&lt;br /&gt;
===Creating a connection to S3 server===&lt;br /&gt;
&lt;br /&gt;
Connecting to the S3 server is the essential starting point for accessing your data. The following example shows how to connect to the server via SSL. &amp;lt;ref&amp;gt;Wikipedia: SSL http://en.wikipedia.org/wiki/SSL&amp;lt;/ref&amp;gt;&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
AWS::S3::Base.establish_connection!(&lt;br /&gt;
        :server            =&amp;gt; 'objects.example.com',&lt;br /&gt;
        :use_ssl           =&amp;gt; true,&lt;br /&gt;
        :access_key_id     =&amp;gt; 'my-access-key',&lt;br /&gt;
        :secret_access_key =&amp;gt; 'my-secret-key'&lt;br /&gt;
)&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
:Source: http://ceph.com/docs/master/radosgw/s3/ruby/&lt;br /&gt;
&lt;br /&gt;
===Listing all buckets you own===&lt;br /&gt;
Buckets hold data about the objects you upload. To access any object, you must access the bucket first. This example shows you how to query the S3 server for a list of all the buckets you own.&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
AWS::S3::Service.buckets.each do |bucket|&lt;br /&gt;
        puts &amp;quot;#{bucket.name}\t#{bucket.creation_date}&amp;quot;&lt;br /&gt;
end&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
'''Expected output:'''&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
mybuckat1   2011-04-21T18:05:39.000Z&lt;br /&gt;
mybuckat2   2011-04-21T18:05:48.000Z&lt;br /&gt;
mybuckat3   2011-04-21T18:07:18.000Z&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
:Source: http://ceph.com/docs/master/radosgw/s3/ruby/&lt;br /&gt;
&lt;br /&gt;
===Listing a bucket's contents===&lt;br /&gt;
For a known bucket (see example for listing all buckets you own), you can also query the server for all objects inside. This code shows you how.&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
new_bucket = AWS::S3::Bucket.find('my-new-bucket')&lt;br /&gt;
new_bucket.each do |object|&lt;br /&gt;
        puts &amp;quot;#{object.key}\t#{object.about['content-length']}\t#{object.about['last-modified']}&amp;quot;&lt;br /&gt;
end&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
'''Expected output'''&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
file1.filex 251262  2011-08-08T21:35:48.000Z&lt;br /&gt;
file2.filex 262518  2011-08-08T21:38:01.000Z&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
:Source: http://ceph.com/docs/master/radosgw/s3/ruby/&lt;br /&gt;
&lt;br /&gt;
===Deleting an object===&lt;br /&gt;
This example shows you how to delete the object &amp;quot;goodbye.txt&amp;quot;. You must specify the bucket as the second parameter of the S3Object.delete function.&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
AWS::S3::S3Object.delete('goodbye.txt', 'my-new-bucket')&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
:Source: http://ceph.com/docs/master/radosgw/s3/ruby/&lt;br /&gt;
&lt;br /&gt;
===Deleting a bucket===&lt;br /&gt;
Bucket removal may be necessary when you're trying to reduce the cost of maintaining your data on the S3 servers. This code allows you to delete buckets but only if they're empty.&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
AWS::S3::Bucket.delete('my-new-bucket')&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
:Source: http://ceph.com/docs/master/radosgw/s3/ruby/&lt;br /&gt;
&lt;br /&gt;
===Forced removal of non-empty buckets===&lt;br /&gt;
If you want to forcibly remove a bucket and dump all of its contents, you can force a deletion as shown below.&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
AWS::S3::Bucket.delete('my-new-bucket', :force =&amp;gt; true)&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
:Source: http://ceph.com/docs/master/radosgw/s3/ruby/&lt;br /&gt;
&lt;br /&gt;
===Creating an object===&lt;br /&gt;
Object creation is important to any program. If your app stores data via objects, you can use the S3 server to host them. In this example we create a text file &amp;quot;hello.txt&amp;quot; with content &amp;quot;Hello World!&amp;quot; and save it to my-new-bucket. Note that the :content_type is required for the ruby method S3Object.store.&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
AWS::S3::S3Object.store(&lt;br /&gt;
        'hello.txt',&lt;br /&gt;
        'Hello World!',&lt;br /&gt;
        'my-new-bucket',&lt;br /&gt;
        :content_type =&amp;gt; 'text/plain'&lt;br /&gt;
)&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
:Source: http://ceph.com/docs/master/radosgw/s3/ruby/&lt;br /&gt;
&lt;br /&gt;
===Change an object's ACL (access control list)===&lt;br /&gt;
Securing user data from view by any web user is important if you're keeping sensitive information about users. This example shows you how to open the hello.txt object created earlier in my-new-bucket to the public (anyone can access it). It also shows how to restrict secret_plans.txt so that no one else can access it.&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
policy = AWS::S3::S3Object.acl('hello.txt', 'my-new-bucket')&lt;br /&gt;
policy.grants = [ AWS::S3::ACL::Grant.grant(:public_read) ]&lt;br /&gt;
AWS::S3::S3Object.acl('hello.txt', 'my-new-bucket', policy)&lt;br /&gt;
&lt;br /&gt;
policy = AWS::S3::S3Object.acl('secret_plans.txt', 'my-new-bucket')&lt;br /&gt;
policy.grants = []&lt;br /&gt;
AWS::S3::S3Object.acl('secret_plans.txt', 'my-new-bucket', policy)&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
:Source: http://ceph.com/docs/master/radosgw/s3/ruby/&lt;br /&gt;
&lt;br /&gt;
===Generating object download urls===&lt;br /&gt;
Generating download links for objects on the S3 server can make it easier for you to share your objects with other users. This example shows how to create a download link for the hello.txt object we created earlier.&lt;br /&gt;
Note that generating a link requires you to specify the bucket it resides in. Links can be public, as in the case of hello.txt below, or be set to expire on a timer through the expires_in symbol. Note that the expiration of the secret_plans.txt is 1 hour (60s * 60).&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
puts AWS::S3::S3Object.url_for(&lt;br /&gt;
        'hello.txt',&lt;br /&gt;
        'my-new-bucket',&lt;br /&gt;
        :authenticated =&amp;gt; false&lt;br /&gt;
)&lt;br /&gt;
&lt;br /&gt;
puts AWS::S3::S3Object.url_for(&lt;br /&gt;
        'secret_plans.txt',&lt;br /&gt;
        'my-new-bucket',&lt;br /&gt;
        :expires_in =&amp;gt; 60 * 60&lt;br /&gt;
)&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
'''Expected Output:'''&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
http://objects.dreamhost.com/my-bucket-name/hello.txt&lt;br /&gt;
http://objects.dreamhost.com/my-bucket-name/secret_plans.txt?Signature=XXXXXXXXXXXXXXXXXXXXXXXXXXX&amp;amp;Expires=1316027075&amp;amp;AWSAccessKeyId=XXXXXXXXXXXXXXXXXXX&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
:Source: http://ceph.com/docs/master/radosgw/s3/ruby/&lt;br /&gt;
&lt;br /&gt;
===Download an object to a folder===&lt;br /&gt;
:'''Note:''' ''This downloads the object poetry.pdf and saves it in /home/larry/documents/''&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
open('/home/larry/documents/poetry.pdf', 'w') do |file|&lt;br /&gt;
        AWS::S3::S3Object.stream('poetry.pdf', 'my-new-bucket') do |chunk|&lt;br /&gt;
                file.write(chunk)&lt;br /&gt;
        end&lt;br /&gt;
end&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
:Source: http://ceph.com/docs/master/radosgw/s3/ruby/&lt;br /&gt;
&lt;br /&gt;
===Upload a file to Amazon S3===&lt;br /&gt;
:As per the Apache License v2.0, the following code may be reproduced and redistributed under the following &amp;lt;ref&amp;gt;[http://aws.amazon.com/apache-2-0/ license]&amp;lt;/ref&amp;gt;.&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
# Copyright 2011-2013 Amazon.com, Inc. or its affiliates. All Rights Reserved.&lt;br /&gt;
#&lt;br /&gt;
# Licensed under the Apache License, Version 2.0 (the &amp;quot;License&amp;quot;). You&lt;br /&gt;
# may not use this file except in compliance with the License. A copy of&lt;br /&gt;
# the License is located at&lt;br /&gt;
#&lt;br /&gt;
#     http://aws.amazon.com/apache2.0/&lt;br /&gt;
#&lt;br /&gt;
# or in the &amp;quot;license&amp;quot; file accompanying this file. This file is&lt;br /&gt;
# distributed on an &amp;quot;AS IS&amp;quot; BASIS, WITHOUT WARRANTIES OR CONDITIONS OF&lt;br /&gt;
# ANY KIND, either express or implied. See the License for the specific&lt;br /&gt;
# language governing permissions and limitations under the License.&lt;br /&gt;
&lt;br /&gt;
require 'aws-sdk'&lt;br /&gt;
&lt;br /&gt;
(bucket_name, file_name) = ARGV&lt;br /&gt;
unless bucket_name &amp;amp;&amp;amp; file_name&lt;br /&gt;
  puts &amp;quot;Usage: upload_file.rb &amp;lt;BUCKET_NAME&amp;gt; &amp;lt;FILE_NAME&amp;gt;&amp;quot;&lt;br /&gt;
  exit 1&lt;br /&gt;
end&lt;br /&gt;
&lt;br /&gt;
# get an instance of the S3 interface using the default configuration&lt;br /&gt;
s3 = AWS::S3.new&lt;br /&gt;
&lt;br /&gt;
# create a bucket&lt;br /&gt;
b = s3.buckets.create(bucket_name)&lt;br /&gt;
&lt;br /&gt;
# upload a file&lt;br /&gt;
basename = File.basename(file_name)&lt;br /&gt;
o = b.objects[basename]&lt;br /&gt;
o.write(:file =&amp;gt; file_name)&lt;br /&gt;
&lt;br /&gt;
puts &amp;quot;Uploaded #{file_name} to:&amp;quot;&lt;br /&gt;
puts o.public_url&lt;br /&gt;
&lt;br /&gt;
# generate a presigned URL&lt;br /&gt;
puts &amp;quot;\nUse this URL to download the file:&amp;quot;&lt;br /&gt;
puts o.url_for(:read)&lt;br /&gt;
&lt;br /&gt;
puts &amp;quot;(press any key to delete the object)&amp;quot;&lt;br /&gt;
$stdin.getc&lt;br /&gt;
&lt;br /&gt;
o.delete&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
See the following link for the documentation for AWS SDK - &amp;lt;ref&amp;gt;[http://docs.aws.amazon.com/AWSRubySDK/latest/_index.html AWS SDK for Ruby]&amp;lt;/ref&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==References==&lt;br /&gt;
&amp;lt;references/&amp;gt;&lt;/div&gt;</summary>
		<author><name>Achen4</name></author>
	</entry>
	<entry>
		<id>https://wiki.expertiza.ncsu.edu/index.php?title=CSC/ECE_517_Spring_2015/ch1a_7_SA&amp;diff=93708</id>
		<title>CSC/ECE 517 Spring 2015/ch1a 7 SA</title>
		<link rel="alternate" type="text/html" href="https://wiki.expertiza.ncsu.edu/index.php?title=CSC/ECE_517_Spring_2015/ch1a_7_SA&amp;diff=93708"/>
		<updated>2015-02-14T01:34:57Z</updated>

		<summary type="html">&lt;p&gt;Achen4: /* Examples */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;[[File:S3.gif|frame|Source: http://www.w7cloud.com/7-reasons-to-use-amazon-s3-cloud-computing-online-storage/|right]] Amazon Simple Storage Service (Amazon S3) is a remote, scalable, secure, and cost efficient storage space service provided by Amazon. Users are able to access their storage on Amazon S3 from the web via REST &amp;lt;ref&amp;gt;[http://en.wikipedia.org/wiki/Representational_state_transfer Wikipedia: REST]&amp;lt;/ref&amp;gt; HTTP &amp;lt;ref&amp;gt;[http://en.wikipedia.org/wiki/Hypertext_Transfer_Protocol HTTP]&amp;lt;/ref&amp;gt;, or SOAP &amp;lt;ref&amp;gt;[http://en.wikipedia.org/wiki/SOAP SOAP]&amp;lt;/ref&amp;gt; making their data accessible from virtually anywhere in the world. Amazon S3 implements redundancy across multiple devices on multiple facilities in order to safeguard against application failure ,data loss and minimization of downtime &amp;lt;ref&amp;gt;[http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/AmazonS3.html Amazon S3]&amp;lt;/ref&amp;gt;. Some of the most prominent users of Amazon S3 include: Netflix, SmugMug, Wetransfer, Pinterest, and NASDAQ &amp;lt;ref&amp;gt;[http://aws.amazon.com/s3/ Amazon S3 Homepage]&amp;lt;/ref&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
[https://docs.google.com/a/ncsu.edu/document/d/1TgBtp7flIPKJwkkShgtcIkt--mtHuwVHsQX6Tpzj1rc/edit Writing Assignment 1a]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
==Background==&lt;br /&gt;
&lt;br /&gt;
Amazon S3 launched in March of 2006 in the United States &amp;lt;ref&amp;gt;[http://phx.corporate-ir.net/phoenix.zhtml?c=176060&amp;amp;p=irol-newsArticle&amp;amp;ID=830816 Amazon Press Release]&amp;lt;/ref&amp;gt; and in Europe in November of 2007 &amp;lt;ref&amp;gt;[http://phx.corporate-ir.net/phoenix.zhtml?c=176060&amp;amp;p=irol-newsArticle&amp;amp;ID=1072982 Amazon Press Release]&amp;lt;/ref&amp;gt;. Since its inception, Amazon S3 has reported tremendous growth. Beginning in July of 2006, S3 hosted 800 million objects; April of 2007, 5 billion objects; October of 2007, 10 billion; Jan 2008, 14 billion &amp;lt;ref&amp;gt;[http://www.allthingsdistributed.com/2008/03/happy_birthday_amazon_s3.html Happy birthday Amazon S3]&amp;lt;/ref&amp;gt;; October 2008, 29 billion &amp;lt;ref&amp;gt;[https://aws.amazon.com/blogs/aws/amazon-s3-now/ amazon s3 now]&amp;lt;/ref&amp;gt;; March 2009, 52 billion &amp;lt;ref&amp;gt;[https://aws.amazon.com/blogs/aws/celebrating-s3s-third-birthday-with-an-upload-promotion/ Upload promotion]&amp;lt;/ref&amp;gt;; August 2009, 64 billion &amp;lt;ref&amp;gt;[http://www.eweek.com/c/a/Cloud-Computing/Amazons-Head-Start-in-the-Cloud-Pays-Off-584083 Amazons Head Start in the Cloud]&amp;lt;/ref&amp;gt;. In April of 2013, S3 now hosts more than 2 trillion objects and on average 1.1 million requests every second! &amp;lt;ref&amp;gt;[https://aws.amazon.com/blogs/aws/amazon-s3-two-trillion-objects-11-million-requests-second/ Two trillion objects]&amp;lt;/ref&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
===Design===&lt;br /&gt;
&lt;br /&gt;
S3 is an example of object storage and is not a traditional hierarchical file system. S3 exposes a simple feature set to improve robustness, and all data in S3 is accessed in terms of objects and buckets. &lt;br /&gt;
&lt;br /&gt;
====Objects====&lt;br /&gt;
&lt;br /&gt;
Objects are the basic units of storage in Amazon S3. Each object is composed of object data and metadata, and S3 supports objects of up to 5 terabytes each. An object's metadata is a set of name-value pairs that describe the object, such as the date it was last modified, and users can also store custom data about the object in its metadata. Every object is identified by a user-defined key and can be versioned. &amp;lt;ref&amp;gt;[http://docs.aws.amazon.com/AmazonS3/latest/dev/Introduction.html S3 Introduction]&amp;lt;/ref&amp;gt; An object consists of the following - key, version ID, value, metadata, subresources, and access control information. &amp;lt;ref&amp;gt;[http://docs.aws.amazon.com/AmazonS3/latest/dev/UsingObjects.html Using Objects]&amp;lt;/ref&amp;gt;&lt;br /&gt;
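&lt;br /&gt;
The minimal sketch below (assuming version 1 of the aws-sdk gem; the bucket and key names are placeholders) shows how some of these pieces of an object can be read from Ruby.&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
require 'aws-sdk'&lt;br /&gt;
&lt;br /&gt;
# look up a single object (bucket and key are placeholders)&lt;br /&gt;
obj = AWS::S3.new.buckets['my-new-bucket'].objects['hello.txt']&lt;br /&gt;
&lt;br /&gt;
puts obj.key             # the user-defined key&lt;br /&gt;
puts obj.content_length  # size of the object data in bytes&lt;br /&gt;
puts obj.last_modified   # system metadata maintained by S3&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;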
&lt;br /&gt;
====Buckets====&lt;br /&gt;
&lt;br /&gt;
A bucket is a container for objects, and every object must belong to a bucket. A bucket can hold any number of objects. Buckets can be configured to be hosted in a particular region (US, EU, Asia Pacific, etc.) in order to optimize latency, and S3 limits the number of buckets per account to 100. &amp;lt;ref&amp;gt;[http://docs.aws.amazon.com/AmazonS3/latest/dev/UsingBucket.html Using Bucket]&amp;lt;/ref&amp;gt;&lt;br /&gt;
&lt;br /&gt;
====Keys and Metadata====&lt;br /&gt;
&lt;br /&gt;
A user specifies a key at object creation, and that key uniquely identifies the object within its bucket. Keys can be at most 1024 bytes long.&lt;br /&gt;
&lt;br /&gt;
There are two kinds of metadata for an object - system metadata and user-defined object metadata. System metadata is used by S3 for object management; for example, the date and Content-Type are stored as system metadata. Object metadata is optional and can be supplied by the user to attach additional information to an object when it is created. &amp;lt;ref&amp;gt;[http://docs.aws.amazon.com/AmazonS3/latest/dev/UsingMetadata.html Using Metadata]&amp;lt;/ref&amp;gt;&lt;br /&gt;
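&lt;br /&gt;
The sketch below, again assuming version 1 of the aws-sdk gem and placeholder bucket, key, and metadata names, illustrates writing an object with user-defined metadata and reading it back.&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
require 'aws-sdk'&lt;br /&gt;
&lt;br /&gt;
s3 = AWS::S3.new&lt;br /&gt;
obj = s3.buckets['my-new-bucket'].objects['notes.txt']&lt;br /&gt;
&lt;br /&gt;
# store user-defined metadata alongside the object data&lt;br /&gt;
obj.write('Hello World!', :metadata =&amp;gt; { 'author' =&amp;gt; 'achen4' })&lt;br /&gt;
&lt;br /&gt;
# read the user-defined metadata back&lt;br /&gt;
puts obj.metadata['author']&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;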
&lt;br /&gt;
====Regions====&lt;br /&gt;
&lt;br /&gt;
Regions allow a user to specify the geographical area where a bucket will be stored. This can be used to optimize latency and minimize costs.&lt;br /&gt;
S3 supports the following regions - US Standard, US West (Oregon) region, US West (N. California) region, EU (Ireland) region, EU (Frankfurt) region, Asia Pacific (Singapore) region, Asia Pacific (Tokyo) region, Asia Pacific (Sydney) region, South America (Sao Paulo) region &amp;lt;ref&amp;gt;[http://docs.aws.amazon.com/AmazonS3/latest/dev/Introduction.html#Regions Regions]&amp;lt;/ref&amp;gt;&lt;br /&gt;
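&lt;br /&gt;
As an illustrative sketch only (version 1 of the aws-sdk gem is assumed, and the bucket name and region are placeholders), a region can be requested for a new bucket through the :location_constraint option.&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
require 'aws-sdk'&lt;br /&gt;
&lt;br /&gt;
s3 = AWS::S3.new&lt;br /&gt;
&lt;br /&gt;
# create a bucket in the EU (Ireland) region instead of US Standard&lt;br /&gt;
bucket = s3.buckets.create('my-eu-bucket', :location_constraint =&amp;gt; 'eu-west-1')&lt;br /&gt;
puts bucket.location_constraint&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;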
&lt;br /&gt;
====Versioning====&lt;br /&gt;
&lt;br /&gt;
Versioning can be enabled on a bucket and used to retrieve and restore every version of every object in that bucket. Once versioning is enabled, every change to an object (create, modify, delete) results in a separate version, which can later be used for restoring or recovery. Versioning is configured at the bucket level, not for individual objects, and a version-enabled bucket cannot be returned to an unversioned bucket; versioning can only be suspended. &amp;lt;ref&amp;gt;[http://docs.aws.amazon.com/AmazonS3/latest/dev/Versioning.html Versioning]&amp;lt;/ref&amp;gt;&lt;br /&gt;
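&lt;br /&gt;
A minimal sketch, assuming version 1 of the aws-sdk gem and a placeholder bucket and key, of enabling versioning and listing the stored versions of one object:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
require 'aws-sdk'&lt;br /&gt;
&lt;br /&gt;
bucket = AWS::S3.new.buckets['my-new-bucket']&lt;br /&gt;
&lt;br /&gt;
# turn versioning on for the whole bucket&lt;br /&gt;
bucket.enable_versioning&lt;br /&gt;
puts bucket.versioning_enabled?&lt;br /&gt;
&lt;br /&gt;
# list every stored version of a single object&lt;br /&gt;
bucket.objects['hello.txt'].versions.each do |version|&lt;br /&gt;
  puts version.version_id&lt;br /&gt;
end&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;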
&lt;br /&gt;
====Access Permissions====&lt;br /&gt;
&lt;br /&gt;
All resources (buckets, objects, etc.) in Amazon S3 are private by default. Only the resource owner can access a resource, and the owner can grant other users access to it. There are two types of access policies in S3 - resource-based policies and user policies. Resource-based policies are attached to a particular resource, while user policies are assigned to a particular user. &amp;lt;ref&amp;gt;[http://docs.aws.amazon.com/AmazonS3/latest/dev/s3-access-control.html Access Control]&amp;lt;/ref&amp;gt;&lt;br /&gt;
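&lt;br /&gt;
As a sketch under the same assumptions (version 1 of the aws-sdk gem, placeholder bucket and key names), an owner can grant public read access with a canned ACL when writing an object.&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
require 'aws-sdk'&lt;br /&gt;
&lt;br /&gt;
obj = AWS::S3.new.buckets['my-new-bucket'].objects['hello.txt']&lt;br /&gt;
&lt;br /&gt;
# objects are private by default; :acl grants a canned permission at write time&lt;br /&gt;
obj.write('Hello World!', :acl =&amp;gt; :public_read)&lt;br /&gt;
puts obj.public_url&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;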
&lt;br /&gt;
====Data Protection====&lt;br /&gt;
&lt;br /&gt;
Objects are stored redundantly on multiple devices across multiple facilities within a region for durability. To improve durability, write requests do not return success until the data has been stored across multiple facilities. Checksums are also used to verify data integrity, and if any corruption is detected it is repaired using the redundant data. &amp;lt;ref&amp;gt;[http://docs.aws.amazon.com/AmazonS3/latest/dev/DataDurability.html Data Durability]&amp;lt;/ref&amp;gt;&lt;br /&gt;
&lt;br /&gt;
===Ruby and Amazon S3===&lt;br /&gt;
&lt;br /&gt;
Amazon Web Services (AWS) provides an SDK &amp;lt;ref&amp;gt;[http://rubygems.org/gems/aws-sdk download]&amp;lt;/ref&amp;gt; that works with Ruby for many Amazon web services, including Amazon S3. Developers new to the AWS SDK should begin with version 2, as it includes many built-in features such as waiters, automatically paginated responses, and a streamlined plugin-style architecture. Version 2 of the SDK has two &amp;quot;packages&amp;quot;, also referred to as &amp;quot;gems&amp;quot; &amp;lt;ref&amp;gt;[http://en.wikipedia.org/wiki/RubyGems Wikipedia: RubyGems]&amp;lt;/ref&amp;gt;:&lt;br /&gt;
&lt;br /&gt;
:* '''aws-sdk-core''' - provides a direct mapping to the AWS APIs, including automatic response paging, waiters, parameter validation, and Ruby type support&lt;br /&gt;
:* '''aws-sdk-resources''' - provides an object-oriented abstraction over the low-level interfaces in the core to reduce the complexity of using them; resource objects reference other objects, such as an Amazon S3 instance, and expose their attributes and actions as instance variables and methods.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
It should also be noted that there is a version 1 of the aws-sdk gem that lacks some of the &amp;quot;convenience features&amp;quot; available in version 2 of the SDK; for more information, see the AWS Ruby development blog &amp;lt;ref&amp;gt;[http://ruby.awsblog.com/post/Tx2OMCYFEZX2I6A/AWS-SDK-for-Ruby-V2-Preview-Release AWS Ruby Development Blog]&amp;lt;/ref&amp;gt;. A brief sketch of how the two version 2 gems fit together is shown below.&lt;br /&gt;
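&lt;br /&gt;
The following is only an illustrative sketch of the two version 2 gems working together; the region, bucket name, and key are placeholders, not values from this article.&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
require 'aws-sdk'&lt;br /&gt;
&lt;br /&gt;
# aws-sdk-core: a low-level client that maps directly onto the S3 API&lt;br /&gt;
client = Aws::S3::Client.new(region: 'us-east-1')&lt;br /&gt;
client.list_buckets.buckets.each { |b| puts b.name }&lt;br /&gt;
&lt;br /&gt;
# aws-sdk-resources: an object-oriented layer built on top of the client&lt;br /&gt;
s3 = Aws::S3::Resource.new(client: client)&lt;br /&gt;
obj = s3.bucket('my-new-bucket').object('hello.txt')&lt;br /&gt;
obj.put(body: 'Hello World!')&lt;br /&gt;
puts obj.get.body.read&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;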
&lt;br /&gt;
==Examples==&lt;br /&gt;
The following section contains examples of Ruby code interfacing with S3. Sources and documentation for the code are provided. Please observe the copyrights if you choose to use any or all of the posted code.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
'''Note:''' There are 3 key classes in the AWS SDK &amp;lt;ref&amp;gt;[http://docs.aws.amazon.com/AmazonS3/latest/dev/UsingTheMPRubyAPI.html Using the Ruby API]&amp;lt;/ref&amp;gt; (a short sketch tying them together follows the list) -&lt;br /&gt;
&lt;br /&gt;
:* '''AWS::S3''' - denotes an interface to Amazon S3 for the Ruby SDK. It has the ''#buckets'' instance method for creating new buckets or accessing existing buckets.&lt;br /&gt;
:* '''AWS::S3::Bucket''' - denotes an Amazon S3 bucket. It provides the ''#objects'' instance method for accessing existing objects, as well as other methods for getting information about a bucket.&lt;br /&gt;
:* '''AWS::S3::S3Object''' - denotes an Amazon S3 object. It provides methods for getting information about an object and for setting access permissions, copying, deleting, and uploading objects.&lt;br /&gt;
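&lt;br /&gt;
The minimal sketch below (the bucket and key names are placeholders) shows how these three classes relate in version 1 of the aws-sdk gem, walking from the service interface down to a single object.&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
require 'aws-sdk'&lt;br /&gt;
&lt;br /&gt;
s3 = AWS::S3.new                       # AWS::S3 - entry point to the service&lt;br /&gt;
bucket = s3.buckets['my-new-bucket']   # AWS::S3::Bucket - one bucket&lt;br /&gt;
object = bucket.objects['hello.txt']   # AWS::S3::S3Object - one object in the bucket&lt;br /&gt;
&lt;br /&gt;
puts object.exists?&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;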
&lt;br /&gt;
===Creating a connection to S3 server===&lt;br /&gt;
&lt;br /&gt;
Connecting to the S3 server is the essential starting point for accessing your data. The following example shows how to connect to the server via SSL &amp;lt;ref&amp;gt;Wikipedia: SSL http://en.wikipedia.org/wiki/SSL&amp;lt;/ref&amp;gt;. Note that this and the other examples taken from the Ceph documentation use the interface of the older aws-s3 gem (AWS::S3::Base, S3Object.store, and so on) rather than the aws-sdk gem described above.&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
AWS::S3::Base.establish_connection!(&lt;br /&gt;
        :server            =&amp;gt; 'objects.example.com',&lt;br /&gt;
        :use_ssl           =&amp;gt; true,&lt;br /&gt;
        :access_key_id     =&amp;gt; 'my-access-key',&lt;br /&gt;
        :secret_access_key =&amp;gt; 'my-secret-key'&lt;br /&gt;
)&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
:Source: http://ceph.com/docs/master/radosgw/s3/ruby/&lt;br /&gt;
&lt;br /&gt;
===Listing all buckets you own===&lt;br /&gt;
Buckets contain the objects you upload, and to access any object you must go through its bucket. This example shows you how to query the S3 server for a list of all the buckets you own.&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
AWS::S3::Service.buckets.each do |bucket|&lt;br /&gt;
        puts &amp;quot;#{bucket.name}\t#{bucket.creation_date}&amp;quot;&lt;br /&gt;
end&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
'''Expected output:'''&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
mybuckat1   2011-04-21T18:05:39.000Z&lt;br /&gt;
mybuckat2   2011-04-21T18:05:48.000Z&lt;br /&gt;
mybuckat3   2011-04-21T18:07:18.000Z&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
:Source: http://ceph.com/docs/master/radosgw/s3/ruby/&lt;br /&gt;
&lt;br /&gt;
===Listing a bucket's contents===&lt;br /&gt;
For a known bucket (see example for listing all buckets you own), you can also query the server for all objects inside. This code shows you how.&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
new_bucket = AWS::S3::Bucket.find('my-new-bucket')&lt;br /&gt;
new_bucket.each do |object|&lt;br /&gt;
        puts &amp;quot;#{object.key}\t#{object.about['content-length']}\t#{object.about['last-modified']}&amp;quot;&lt;br /&gt;
end&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
'''Expected output'''&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
file1.filex 251262  2011-08-08T21:35:48.000Z&lt;br /&gt;
file2.filex 262518  2011-08-08T21:38:01.000Z&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
:Source: http://ceph.com/docs/master/radosgw/s3/ruby/&lt;br /&gt;
&lt;br /&gt;
===Deleting an object===&lt;br /&gt;
This example shows you how to delete the object &amp;quot;goodbye.txt&amp;quot;. You must specify the bucket as the second parameter of the S3Object.delete function.&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
AWS::S3::S3Object.delete('goodbye.txt', 'my-new-bucket')&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
:Source: http://ceph.com/docs/master/radosgw/s3/ruby/&lt;br /&gt;
&lt;br /&gt;
===Deleting a bucket===&lt;br /&gt;
Bucket removal may be necessary when you're trying to reduce the cost of maintaining your data on the S3 servers. This code allows you to delete a bucket, but only if it is empty.&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
AWS::S3::Bucket.delete('my-new-bucket')&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
:Source: http://ceph.com/docs/master/radosgw/s3/ruby/&lt;br /&gt;
&lt;br /&gt;
===Forced removal of non-empty buckets===&lt;br /&gt;
If you want to forcibly remove a bucket and dump all of its contents, you can force a deletion as shown below.&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
AWS::S3::Bucket.delete('my-new-bucket', :force =&amp;gt; true)&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
:Source: http://ceph.com/docs/master/radosgw/s3/ruby/&lt;br /&gt;
&lt;br /&gt;
===Creating an object===&lt;br /&gt;
Object creation is important to any program; if your app stores data as objects, you can use the S3 server to host them. In this example we create a text file &amp;quot;hello.txt&amp;quot; with the content &amp;quot;Hello World!&amp;quot; and save it to my-new-bucket. Note the :content_type option passed to the Ruby method S3Object.store, which sets the MIME type under which the object will be served.&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
AWS::S3::S3Object.store(&lt;br /&gt;
        'hello.txt',&lt;br /&gt;
        'Hello World!',&lt;br /&gt;
        'my-new-bucket',&lt;br /&gt;
        :content_type =&amp;gt; 'text/plain'&lt;br /&gt;
)&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
:Source: http://ceph.com/docs/master/radosgw/s3/ruby/&lt;br /&gt;
&lt;br /&gt;
===Change an object's ACL (access control list)===&lt;br /&gt;
Securing user data from public view is important if you're keeping sensitive information about users. This example shows how to open the hello.txt object created earlier in my-new-bucket to the public (anyone can access it), and also how to restrict secret_plans.txt so that no one else can access it.&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
policy = AWS::S3::S3Object.acl('hello.txt', 'my-new-bucket')&lt;br /&gt;
policy.grants = [ AWS::S3::ACL::Grant.grant(:public_read) ]&lt;br /&gt;
AWS::S3::S3Object.acl('hello.txt', 'my-new-bucket', policy)&lt;br /&gt;
&lt;br /&gt;
policy = AWS::S3::S3Object.acl('secret_plans.txt', 'my-new-bucket')&lt;br /&gt;
policy.grants = []&lt;br /&gt;
AWS::S3::S3Object.acl('secret_plans.txt', 'my-new-bucket', policy)&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
:Source: http://ceph.com/docs/master/radosgw/s3/ruby/&lt;br /&gt;
&lt;br /&gt;
===Generating object download URLs===&lt;br /&gt;
This example generates a public (unauthenticated) URL for hello.txt and a signed URL for secret_plans.txt that expires after one hour.&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
puts AWS::S3::S3Object.url_for(&lt;br /&gt;
        'hello.txt',&lt;br /&gt;
        'my-new-bucket',&lt;br /&gt;
        :authenticated =&amp;gt; false&lt;br /&gt;
)&lt;br /&gt;
&lt;br /&gt;
puts AWS::S3::S3Object.url_for(&lt;br /&gt;
        'secret_plans.txt',&lt;br /&gt;
        'my-new-bucket',&lt;br /&gt;
        :expires_in =&amp;gt; 60 * 60&lt;br /&gt;
)&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
'''Expected Output:'''&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
http://objects.dreamhost.com/my-bucket-name/hello.txt&lt;br /&gt;
http://objects.dreamhost.com/my-bucket-name/secret_plans.txt?Signature=XXXXXXXXXXXXXXXXXXXXXXXXXXX&amp;amp;Expires=1316027075&amp;amp;AWSAccessKeyId=XXXXXXXXXXXXXXXXXXX&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
:Source: http://ceph.com/docs/master/radosgw/s3/ruby/&lt;br /&gt;
&lt;br /&gt;
===Download an object to a folder===&lt;br /&gt;
:'''Note:''' ''This downloads the object poetry.pdf and saves it in /home/larry/documents/''&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
open('/home/larry/documents/poetry.pdf', 'w') do |file|&lt;br /&gt;
        AWS::S3::S3Object.stream('poetry.pdf', 'my-new-bucket') do |chunk|&lt;br /&gt;
                file.write(chunk)&lt;br /&gt;
        end&lt;br /&gt;
end&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
:Source: http://ceph.com/docs/master/radosgw/s3/ruby/&lt;br /&gt;
&lt;br /&gt;
===Upload a file to Amazon S3===&lt;br /&gt;
:As per the Apache License v2.0, the following code may be reproduced and redistributed under the following license &amp;lt;ref&amp;gt;[http://aws.amazon.com/apache-2-0/ license]&amp;lt;/ref&amp;gt;.&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
# Copyright 2011-2013 Amazon.com, Inc. or its affiliates. All Rights Reserved.&lt;br /&gt;
#&lt;br /&gt;
# Licensed under the Apache License, Version 2.0 (the &amp;quot;License&amp;quot;). You&lt;br /&gt;
# may not use this file except in compliance with the License. A copy of&lt;br /&gt;
# the License is located at&lt;br /&gt;
#&lt;br /&gt;
#     http://aws.amazon.com/apache2.0/&lt;br /&gt;
#&lt;br /&gt;
# or in the &amp;quot;license&amp;quot; file accompanying this file. This file is&lt;br /&gt;
# distributed on an &amp;quot;AS IS&amp;quot; BASIS, WITHOUT WARRANTIES OR CONDITIONS OF&lt;br /&gt;
# ANY KIND, either express or implied. See the License for the specific&lt;br /&gt;
# language governing permissions and limitations under the License.&lt;br /&gt;
&lt;br /&gt;
require 'aws-sdk'&lt;br /&gt;
&lt;br /&gt;
(bucket_name, file_name) = ARGV&lt;br /&gt;
unless bucket_name &amp;amp;&amp;amp; file_name&lt;br /&gt;
  puts &amp;quot;Usage: upload_file.rb &amp;lt;BUCKET_NAME&amp;gt; &amp;lt;FILE_NAME&amp;gt;&amp;quot;&lt;br /&gt;
  exit 1&lt;br /&gt;
end&lt;br /&gt;
&lt;br /&gt;
# get an instance of the S3 interface using the default configuration&lt;br /&gt;
s3 = AWS::S3.new&lt;br /&gt;
&lt;br /&gt;
# create a bucket&lt;br /&gt;
b = s3.buckets.create(bucket_name)&lt;br /&gt;
&lt;br /&gt;
# upload a file&lt;br /&gt;
basename = File.basename(file_name)&lt;br /&gt;
o = b.objects[basename]&lt;br /&gt;
o.write(:file =&amp;gt; file_name)&lt;br /&gt;
&lt;br /&gt;
puts &amp;quot;Uploaded #{file_name} to:&amp;quot;&lt;br /&gt;
puts o.public_url&lt;br /&gt;
&lt;br /&gt;
# generate a presigned URL&lt;br /&gt;
puts &amp;quot;\nUse this URL to download the file:&amp;quot;&lt;br /&gt;
puts o.url_for(:read)&lt;br /&gt;
&lt;br /&gt;
puts &amp;quot;(press any key to delete the object)&amp;quot;&lt;br /&gt;
$stdin.getc&lt;br /&gt;
&lt;br /&gt;
o.delete&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
See the following link for the AWS SDK for Ruby documentation - &amp;lt;ref&amp;gt;[http://docs.aws.amazon.com/AWSRubySDK/latest/_index.html AWS SDK for Ruby]&amp;lt;/ref&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==References==&lt;br /&gt;
&amp;lt;references/&amp;gt;&lt;/div&gt;</summary>
		<author><name>Achen4</name></author>
	</entry>
	<entry>
		<id>https://wiki.expertiza.ncsu.edu/index.php?title=CSC/ECE_517_Spring_2015/ch1a_7_SA&amp;diff=93704</id>
		<title>CSC/ECE 517 Spring 2015/ch1a 7 SA</title>
		<link rel="alternate" type="text/html" href="https://wiki.expertiza.ncsu.edu/index.php?title=CSC/ECE_517_Spring_2015/ch1a_7_SA&amp;diff=93704"/>
		<updated>2015-02-14T01:21:43Z</updated>

		<summary type="html">&lt;p&gt;Achen4: /* Creating an object */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;[[File:S3.gif|frame|Source: http://www.w7cloud.com/7-reasons-to-use-amazon-s3-cloud-computing-online-storage/|right]] Amazon Simple Storage Service (Amazon S3) is a remote, scalable, secure, and cost-efficient storage service provided by Amazon. Users can access their storage on Amazon S3 over the web via REST &amp;lt;ref&amp;gt;[http://en.wikipedia.org/wiki/Representational_state_transfer Wikipedia: REST]&amp;lt;/ref&amp;gt; over HTTP &amp;lt;ref&amp;gt;[http://en.wikipedia.org/wiki/Hypertext_Transfer_Protocol HTTP]&amp;lt;/ref&amp;gt; or SOAP &amp;lt;ref&amp;gt;[http://en.wikipedia.org/wiki/SOAP SOAP]&amp;lt;/ref&amp;gt;, making their data accessible from virtually anywhere in the world. Amazon S3 stores data redundantly across multiple devices in multiple facilities in order to safeguard against application failure and data loss and to minimize downtime &amp;lt;ref&amp;gt;[http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/AmazonS3.html Amazon S3]&amp;lt;/ref&amp;gt;. Some of the most prominent users of Amazon S3 include Netflix, SmugMug, WeTransfer, Pinterest, and NASDAQ &amp;lt;ref&amp;gt;[http://aws.amazon.com/s3/ Amazon S3 Homepage]&amp;lt;/ref&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
[https://docs.google.com/a/ncsu.edu/document/d/1TgBtp7flIPKJwkkShgtcIkt--mtHuwVHsQX6Tpzj1rc/edit Writing Assignment 1a]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
==Background==&lt;br /&gt;
&lt;br /&gt;
Amazon S3 launched in March of 2006 in the United States &amp;lt;ref&amp;gt;[http://phx.corporate-ir.net/phoenix.zhtml?c=176060&amp;amp;p=irol-newsArticle&amp;amp;ID=830816 Amazon Press Release]&amp;lt;/ref&amp;gt; and in Europe in November of 2007 &amp;lt;ref&amp;gt;[http://phx.corporate-ir.net/phoenix.zhtml?c=176060&amp;amp;p=irol-newsArticle&amp;amp;ID=1072982 Amazon Press Release]&amp;lt;/ref&amp;gt;. Since its inception, Amazon S3 has reported tremendous growth. In July of 2006, S3 hosted 800 million objects; by April of 2007, 5 billion objects; by October of 2007, 10 billion; by January 2008, 14 billion &amp;lt;ref&amp;gt;[http://www.allthingsdistributed.com/2008/03/happy_birthday_amazon_s3.html Happy birthday Amazon S3]&amp;lt;/ref&amp;gt;; by October 2008, 29 billion &amp;lt;ref&amp;gt;[https://aws.amazon.com/blogs/aws/amazon-s3-now/ amazon s3 now]&amp;lt;/ref&amp;gt;; by March 2009, 52 billion &amp;lt;ref&amp;gt;[https://aws.amazon.com/blogs/aws/celebrating-s3s-third-birthday-with-an-upload-promotion/ Upload promotion]&amp;lt;/ref&amp;gt;; and by August 2009, 64 billion &amp;lt;ref&amp;gt;[http://www.eweek.com/c/a/Cloud-Computing/Amazons-Head-Start-in-the-Cloud-Pays-Off-584083 Amazons Head Start in the Cloud]&amp;lt;/ref&amp;gt;. As of April 2013, S3 hosted more than 2 trillion objects and served an average of 1.1 million requests every second &amp;lt;ref&amp;gt;[https://aws.amazon.com/blogs/aws/amazon-s3-two-trillion-objects-11-million-requests-second/ Two trillion objects]&amp;lt;/ref&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
===Design===&lt;br /&gt;
&lt;br /&gt;
S3 is an example of object storage and is unlike a traditional hierarchical file system. S3 exposes a simple feature set to improve robustness, and all data in S3 is accessed in terms of objects and buckets. &lt;br /&gt;
&lt;br /&gt;
====Objects====&lt;br /&gt;
&lt;br /&gt;
Objects are the basic units of storage in Amazon S3. Each object is composed of object data and metadata, and S3 supports objects of up to 5 terabytes each. Metadata is a set of name-value pairs that describe the object, such as the date it was last modified; users can also store custom data about an object in its metadata. Every object is identified by a user-defined key and, when versioning is enabled on its bucket, can be versioned. &amp;lt;ref&amp;gt;[http://docs.aws.amazon.com/AmazonS3/latest/dev/Introduction.html S3 Introduction]&amp;lt;/ref&amp;gt; An object consists of the following: key, version ID, value, metadata, subresources, and access control information. &amp;lt;ref&amp;gt;[http://docs.aws.amazon.com/AmazonS3/latest/dev/UsingObjects.html Using Objects]&amp;lt;/ref&amp;gt;&lt;br /&gt;
&lt;br /&gt;
====Buckets====&lt;br /&gt;
&lt;br /&gt;
A bucket is a container for objects, and every object must belong to a bucket. Any number of objects can be stored in a bucket. Buckets can be configured to be hosted in a particular region (US, EU, Asia Pacific, etc.) in order to optimize latency, and S3 limits the number of buckets per account to 100. &amp;lt;ref&amp;gt;[http://docs.aws.amazon.com/AmazonS3/latest/dev/UsingBucket.html Using Bucket]&amp;lt;/ref&amp;gt;&lt;br /&gt;
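&lt;br /&gt;
The following is a minimal sketch, not taken from the cited sources, of creating and inspecting a bucket with version 1 of the ''aws-sdk'' gem (introduced later on this page); the bucket name is a placeholder and credentials are assumed to be configured elsewhere.&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
require 'aws-sdk'&lt;br /&gt;
&lt;br /&gt;
# uses whatever credentials and region AWS.config or the environment provides&lt;br /&gt;
s3 = AWS::S3.new&lt;br /&gt;
&lt;br /&gt;
# bucket names are globally unique across all of S3&lt;br /&gt;
bucket = s3.buckets.create('my-new-bucket')&lt;br /&gt;
&lt;br /&gt;
puts bucket.exists?&lt;br /&gt;
s3.buckets.each { |b| puts b.name }&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;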
&lt;br /&gt;
====Keys and Metadata====&lt;br /&gt;
&lt;br /&gt;
A user specifies a key at object creation, and this key uniquely identifies the object within its bucket. Object keys can be at most 1024 bytes long.&lt;br /&gt;
&lt;br /&gt;
There are two kinds of metadata for an object: system metadata and object metadata. System metadata is used by S3 for object management; for example, the object's creation date and Content-Type are stored as system metadata. Object metadata is optional and can be used to attach additional, user-defined information to an object when it is created. &amp;lt;ref&amp;gt;[http://docs.aws.amazon.com/AmazonS3/latest/dev/UsingMetadata.html Using Metadata]&amp;lt;/ref&amp;gt;&lt;br /&gt;
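&lt;br /&gt;
As a rough illustration (again assuming the ''aws-sdk'' v1 interface; the bucket name, key, and metadata values are placeholders), object metadata is supplied when the object is written and can be read back from the stored object:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
require 'aws-sdk'&lt;br /&gt;
&lt;br /&gt;
s3  = AWS::S3.new&lt;br /&gt;
obj = s3.buckets['my-new-bucket'].objects['reports/2015-01.txt']&lt;br /&gt;
&lt;br /&gt;
# the key ('reports/2015-01.txt') uniquely identifies the object in its bucket;&lt;br /&gt;
# :metadata attaches user-defined name-value pairs alongside the object data&lt;br /&gt;
obj.write('January report', :metadata =&amp;gt; { 'author' =&amp;gt; 'achen4', 'team' =&amp;gt; 'csc517' })&lt;br /&gt;
&lt;br /&gt;
puts obj.metadata['author']&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;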
&lt;br /&gt;
====Regions====&lt;br /&gt;
&lt;br /&gt;
Regions allow a user to specify the geographical area where buckets will be stored. This can be used to optimize latency and minimize costs.&lt;br /&gt;
S3 supports the following regions - US Standard, US West (Oregon) region, US West (N. California) region, EU (Ireland) region, EU (Frankfurt) region, Asia Pacific (Singapore) region, Asia Pacific (Tokyo) region, Asia Pacific (Sydney) region, South America (Sao Paulo) region &amp;lt;ref&amp;gt;[http://docs.aws.amazon.com/AmazonS3/latest/dev/Introduction.html#Regions Regions]&amp;lt;/ref&amp;gt;&lt;br /&gt;
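&lt;br /&gt;
A small sketch of selecting a region with the ''aws-sdk'' v1 gem (the region and bucket names are only illustrative):&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
require 'aws-sdk'&lt;br /&gt;
&lt;br /&gt;
# point the SDK at a specific region; buckets created through this&lt;br /&gt;
# interface are then stored in that region's facilities&lt;br /&gt;
AWS.config(:region =&amp;gt; 'us-west-2')&lt;br /&gt;
&lt;br /&gt;
s3 = AWS::S3.new&lt;br /&gt;
s3.buckets.create('my-oregon-bucket')&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;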
&lt;br /&gt;
====Versioning====&lt;br /&gt;
&lt;br /&gt;
Versioning can be enabled on a bucket to retrieve and restore every version of the objects it contains. Once enabled, every change to an object (create, modify, delete) results in a separate version of the object, which can later be used for restoring or recovery. Versioning is configured at the bucket level, not for individual objects. It can be enabled or suspended per bucket, but a version-enabled bucket cannot be returned to the unversioned state; versioning can only be suspended. &amp;lt;ref&amp;gt;[http://docs.aws.amazon.com/AmazonS3/latest/dev/Versioning.html Versioning]&amp;lt;/ref&amp;gt;&lt;br /&gt;
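&lt;br /&gt;
A sketch of per-bucket versioning with the ''aws-sdk'' v1 gem (bucket and key names are placeholders; the exact method names should be checked against the SDK documentation):&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
require 'aws-sdk'&lt;br /&gt;
&lt;br /&gt;
bucket = AWS::S3.new.buckets['my-new-bucket']&lt;br /&gt;
&lt;br /&gt;
# versioning is configured on the bucket, not on individual objects&lt;br /&gt;
bucket.enable_versioning&lt;br /&gt;
puts bucket.versioning_enabled?&lt;br /&gt;
&lt;br /&gt;
# each write now produces a new, independently retrievable version&lt;br /&gt;
obj = bucket.objects['notes.txt']&lt;br /&gt;
obj.write('first draft')&lt;br /&gt;
obj.write('second draft')&lt;br /&gt;
obj.versions.each { |version| puts version.version_id }&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;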
&lt;br /&gt;
====Access Permissions====&lt;br /&gt;
&lt;br /&gt;
All resources (buckets, objects, etc.) in Amazon S3 are private by default. Only the resource owner can access a resource, and the owner can grant other users access to it. There are two types of access policies in S3: resource-based policies and user policies. Resource-based policies are attached to a particular resource, while user policies are assigned to a particular user.&amp;lt;ref&amp;gt;[http://docs.aws.amazon.com/AmazonS3/latest/dev/s3-access-control.html Access Control]&amp;lt;/ref&amp;gt;&lt;br /&gt;
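&lt;br /&gt;
For example (a sketch using the ''aws-sdk'' v1 gem; the bucket and key are placeholders), a canned ACL can be supplied when an object is written to open it up beyond the default private access:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
require 'aws-sdk'&lt;br /&gt;
&lt;br /&gt;
bucket = AWS::S3.new.buckets['my-new-bucket']&lt;br /&gt;
&lt;br /&gt;
# objects are private by default; :acl applies a canned ACL at write time&lt;br /&gt;
bucket.objects['public/readme.txt'].write('open to everyone', :acl =&amp;gt; :public_read)&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;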
&lt;br /&gt;
====Data Protection====&lt;br /&gt;
&lt;br /&gt;
Objects are redundantly stored on multiple devices across multiple facilities within a region for durability. To improve durability, write requests do not return success until the data has been stored across multiple facilities. Checksums are also used to verify data integrity; if any corruption is detected, it is repaired using the redundant data.&amp;lt;ref&amp;gt;[http://docs.aws.amazon.com/AmazonS3/latest/dev/DataDurability.html Data Durability]&amp;lt;/ref&amp;gt;&lt;br /&gt;
&lt;br /&gt;
===Ruby and Amazon S3===&lt;br /&gt;
&lt;br /&gt;
Amazon Web Services (AWS) provides an SDK &amp;lt;ref&amp;gt;[http://rubygems.org/gems/aws-sdk download]&amp;lt;/ref&amp;gt; for Ruby that covers many Amazon web services, including Amazon S3. Developers new to the AWS SDK should begin with Version 2, as it includes many built-in features such as waiters, automatically paginated responses, and a streamlined plugin-style architecture. Version 2 of the SDK is split into two &amp;quot;packages&amp;quot;, also referred to as &amp;quot;gems&amp;quot; &amp;lt;ref&amp;gt;[http://en.wikipedia.org/wiki/RubyGems Wikipedia: RubyGems]&amp;lt;/ref&amp;gt;:&lt;br /&gt;
&lt;br /&gt;
:* '''aws-sdk-core''' - provides a direct mapping to the AWS APIs including automatic response paging, waiters, parameter validation, and Ruby type support&lt;br /&gt;
:* '''aws-sdk-resources'''  - provides an object-oriented abstraction over the low-level interfaces in the core; resource objects represent entities such as an Amazon S3 bucket or object and expose their attributes and actions as instance variables and methods (a brief usage sketch follows this list).&lt;br /&gt;
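&lt;br /&gt;
The sketch below shows the flavor of the Version 2 resource interface; the region, bucket, and key are placeholders, and it assumes the aws-sdk-resources gem is installed.&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
require 'aws-sdk-resources'&lt;br /&gt;
&lt;br /&gt;
# resource objects wrap the low-level calls provided by aws-sdk-core&lt;br /&gt;
s3  = Aws::S3::Resource.new(region: 'us-east-1')&lt;br /&gt;
obj = s3.bucket('my-new-bucket').object('hello.txt')&lt;br /&gt;
&lt;br /&gt;
obj.put(body: 'Hello World!')                  # upload the object&lt;br /&gt;
puts obj.presigned_url(:get, expires_in: 3600) # time-limited download URL&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;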
&lt;br /&gt;
&lt;br /&gt;
It should also be noted that Version 1 of the aws-sdk gem lacks some of the &amp;quot;convenience features&amp;quot; available in Version 2. For more information, see the AWS Ruby Development Blog &amp;lt;ref&amp;gt;[http://ruby.awsblog.com/post/Tx2OMCYFEZX2I6A/AWS-SDK-for-Ruby-V2-Preview-Release AWS Ruby Development Blog]&amp;lt;/ref&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
==Examples==&lt;br /&gt;
The following section contains examples of Ruby code interfacing with S3. Sources and documentation for the code are provided; please observe the copyrights if you choose to use any or all of the posted code.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
'''Note:''' There are three key classes in the AWS SDK &amp;lt;ref&amp;gt;[http://docs.aws.amazon.com/AmazonS3/latest/dev/UsingTheMPRubyAPI.html Using the Ruby API]&amp;lt;/ref&amp;gt;:&lt;br /&gt;
&lt;br /&gt;
:* '''AWS::S3''' - Denotes an interface to Amazon S3 for the Ruby SDK. It has the ''#buckets'' instance method for creating new buckets or accessing existing buckets.&lt;br /&gt;
:* '''AWS::S3::Bucket''' - Denotes an Amazon S3 bucket. It provides the ''#objects'' instance method for accessing existing objects, as well as other methods for getting information about the bucket.&lt;br /&gt;
:* '''AWS::S3::S3Object''' - Denotes an Amazon S3 object. It provides methods that give information about the object, as well as methods for setting access permissions and for copying, deleting, and uploading objects.&lt;br /&gt;
&lt;br /&gt;
===Creating a connection to S3 server===&lt;br /&gt;
&lt;br /&gt;
Connecting to the S3 server is the essential starting point for accessing your data. The following example shows how to connect to the server via SSL &amp;lt;ref&amp;gt;Wikipedia: SSL http://en.wikipedia.org/wiki/SSL&amp;lt;/ref&amp;gt;. Note that these examples, adapted from the Ceph RADOS Gateway documentation cited below, use the older class-level interface of the ''aws-s3'' gem rather than the ''aws-sdk'' gem described above.&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
AWS::S3::Base.establish_connection!(&lt;br /&gt;
        :server            =&amp;gt; 'objects.example.com',&lt;br /&gt;
        :use_ssl           =&amp;gt; true,&lt;br /&gt;
        :access_key_id     =&amp;gt; 'my-access-key',&lt;br /&gt;
        :secret_access_key =&amp;gt; 'my-secret-key'&lt;br /&gt;
)&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
:Source: http://ceph.com/docs/master/radosgw/s3/ruby/&lt;br /&gt;
&lt;br /&gt;
===Listing all buckets you own===&lt;br /&gt;
Buckets contain the objects you upload. To access any object, you must go through its bucket. This example shows how to query the S3 server for a list of all the buckets you own.&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
AWS::S3::Service.buckets.each do |bucket|&lt;br /&gt;
        puts &amp;quot;#{bucket.name}\t#{bucket.creation_date}&amp;quot;&lt;br /&gt;
end&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
'''Expected output:'''&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
mybuckat1   2011-04-21T18:05:39.000Z&lt;br /&gt;
mybuckat2   2011-04-21T18:05:48.000Z&lt;br /&gt;
mybuckat3   2011-04-21T18:07:18.000Z&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
:Source: http://ceph.com/docs/master/radosgw/s3/ruby/&lt;br /&gt;
&lt;br /&gt;
===Listing a bucket's contents===&lt;br /&gt;
For a known bucket (see example for listing all buckets you own), you can also query the server for all objects inside. This code shows you how.&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
new_bucket = AWS::S3::Bucket.find('my-new-bucket')&lt;br /&gt;
new_bucket.each do |object|&lt;br /&gt;
        puts &amp;quot;#{object.key}\t#{object.about['content-length']}\t#{object.about['last-modified']}&amp;quot;&lt;br /&gt;
end&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
'''Expected output:'''&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
file1.filex 251262  2011-08-08T21:35:48.000Z&lt;br /&gt;
file2.filex 262518  2011-08-08T21:38:01.000Z&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
:Source: http://ceph.com/docs/master/radosgw/s3/ruby/&lt;br /&gt;
&lt;br /&gt;
===Deleting a bucket===&lt;br /&gt;
Bucket removal may be necessary when you're trying to reduce the cost of maintaining your data on the S3 servers. This code allows you to delete buckets but only if they're empty.&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
AWS::S3::Bucket.delete('my-new-bucket')&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
:Source: http://ceph.com/docs/master/radosgw/s3/ruby/&lt;br /&gt;
&lt;br /&gt;
===Forced removal of non-empty buckets===&lt;br /&gt;
If you want to forcibly remove a bucket and dump all of its contents, you can force a deletion as shown below.&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
AWS::S3::Bucket.delete('my-new-bucket', :force =&amp;gt; true)&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
:Source: http://ceph.com/docs/master/radosgw/s3/ruby/&lt;br /&gt;
&lt;br /&gt;
===Creating an object===&lt;br /&gt;
Object creation is central to most applications. If your app stores data as objects, you can use the S3 server to host them. In this example we create a text file &amp;quot;hello.txt&amp;quot; with the content &amp;quot;Hello World!&amp;quot; and save it to my-new-bucket. The &amp;lt;code&amp;gt;:content_type&amp;lt;/code&amp;gt; option tells S3 which MIME type the object should be served with.&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
AWS::S3::S3Object.store(&lt;br /&gt;
        'hello.txt',&lt;br /&gt;
        'Hello World!',&lt;br /&gt;
        'my-new-bucket',&lt;br /&gt;
        :content_type =&amp;gt; 'text/plain'&lt;br /&gt;
)&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
:Source: http://ceph.com/docs/master/radosgw/s3/ruby/&lt;br /&gt;
&lt;br /&gt;
===Change an object's ACL (access control list)===&lt;br /&gt;
The first call below makes hello.txt publicly readable; the second removes all access grants from secret_plans.txt, making it private again.&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
policy = AWS::S3::S3Object.acl('hello.txt', 'my-new-bucket')&lt;br /&gt;
policy.grants = [ AWS::S3::ACL::Grant.grant(:public_read) ]&lt;br /&gt;
AWS::S3::S3Object.acl('hello.txt', 'my-new-bucket', policy)&lt;br /&gt;
&lt;br /&gt;
policy = AWS::S3::S3Object.acl('secret_plans.txt', 'my-new-bucket')&lt;br /&gt;
policy.grants = []&lt;br /&gt;
AWS::S3::S3Object.acl('secret_plans.txt', 'my-new-bucket', policy)&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
:Source: http://ceph.com/docs/master/radosgw/s3/ruby/&lt;br /&gt;
&lt;br /&gt;
===Download an object to a folder===&lt;br /&gt;
:'''Note:''' ''This downloads the object poetry.pdf and saves it in /home/larry/documents/''&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
open('/home/larry/documents/poetry.pdf', 'w') do |file|&lt;br /&gt;
        AWS::S3::S3Object.stream('poetry.pdf', 'my-new-bucket') do |chunk|&lt;br /&gt;
                file.write(chunk)&lt;br /&gt;
        end&lt;br /&gt;
end&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
:Source: http://ceph.com/docs/master/radosgw/s3/ruby/&lt;br /&gt;
&lt;br /&gt;
===Deleting an object===&lt;br /&gt;
:'''Note:''' ''This deletes the object goodbye.txt''&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
AWS::S3::S3Object.delete('goodbye.txt', 'my-new-bucket')&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
:Source: http://ceph.com/docs/master/radosgw/s3/ruby/&lt;br /&gt;
&lt;br /&gt;
===Generating object download URLs===&lt;br /&gt;
An unauthenticated URL only works for objects that are publicly readable; passing &amp;lt;code&amp;gt;:expires_in&amp;lt;/code&amp;gt; instead generates a signed URL that expires after the given number of seconds.&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
puts AWS::S3::S3Object.url_for(&lt;br /&gt;
        'hello.txt',&lt;br /&gt;
        'my-new-bucket',&lt;br /&gt;
        :authenticated =&amp;gt; false&lt;br /&gt;
)&lt;br /&gt;
&lt;br /&gt;
puts AWS::S3::S3Object.url_for(&lt;br /&gt;
        'secret_plans.txt',&lt;br /&gt;
        'my-new-bucket',&lt;br /&gt;
        :expires_in =&amp;gt; 60 * 60&lt;br /&gt;
)&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
'''Expected Output:'''&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
http://objects.dreamhost.com/my-bucket-name/hello.txt&lt;br /&gt;
http://objects.dreamhost.com/my-bucket-name/secret_plans.txt?Signature=XXXXXXXXXXXXXXXXXXXXXXXXXXX&amp;amp;Expires=1316027075&amp;amp;AWSAccessKeyId=XXXXXXXXXXXXXXXXXXX&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
:Source: http://ceph.com/docs/master/radosgw/s3/ruby/&lt;br /&gt;
&lt;br /&gt;
===Upload a file to Amazon S3===&lt;br /&gt;
:As per the Apache License, Version 2.0, the following code may be reproduced and redistributed under the terms of this &amp;lt;ref&amp;gt;[http://aws.amazon.com/apache-2-0/ license]&amp;lt;/ref&amp;gt;.&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
# Copyright 2011-2013 Amazon.com, Inc. or its affiliates. All Rights Reserved.&lt;br /&gt;
#&lt;br /&gt;
# Licensed under the Apache License, Version 2.0 (the &amp;quot;License&amp;quot;). You&lt;br /&gt;
# may not use this file except in compliance with the License. A copy of&lt;br /&gt;
# the License is located at&lt;br /&gt;
#&lt;br /&gt;
#     http://aws.amazon.com/apache2.0/&lt;br /&gt;
#&lt;br /&gt;
# or in the &amp;quot;license&amp;quot; file accompanying this file. This file is&lt;br /&gt;
# distributed on an &amp;quot;AS IS&amp;quot; BASIS, WITHOUT WARRANTIES OR CONDITIONS OF&lt;br /&gt;
# ANY KIND, either express or implied. See the License for the specific&lt;br /&gt;
# language governing permissions and limitations under the License.&lt;br /&gt;
&lt;br /&gt;
require 'aws-sdk'&lt;br /&gt;
&lt;br /&gt;
(bucket_name, file_name) = ARGV&lt;br /&gt;
unless bucket_name &amp;amp;&amp;amp; file_name&lt;br /&gt;
  puts &amp;quot;Usage: upload_file.rb &amp;lt;BUCKET_NAME&amp;gt; &amp;lt;FILE_NAME&amp;gt;&amp;quot;&lt;br /&gt;
  exit 1&lt;br /&gt;
end&lt;br /&gt;
&lt;br /&gt;
# get an instance of the S3 interface using the default configuration&lt;br /&gt;
s3 = AWS::S3.new&lt;br /&gt;
&lt;br /&gt;
# create a bucket&lt;br /&gt;
b = s3.buckets.create(bucket_name)&lt;br /&gt;
&lt;br /&gt;
# upload a file&lt;br /&gt;
basename = File.basename(file_name)&lt;br /&gt;
o = b.objects[basename]&lt;br /&gt;
o.write(:file =&amp;gt; file_name)&lt;br /&gt;
&lt;br /&gt;
puts &amp;quot;Uploaded #{file_name} to:&amp;quot;&lt;br /&gt;
puts o.public_url&lt;br /&gt;
&lt;br /&gt;
# generate a presigned URL&lt;br /&gt;
puts &amp;quot;\nUse this URL to download the file:&amp;quot;&lt;br /&gt;
puts o.url_for(:read)&lt;br /&gt;
&lt;br /&gt;
puts &amp;quot;(press any key to delete the object)&amp;quot;&lt;br /&gt;
$stdin.getc&lt;br /&gt;
&lt;br /&gt;
o.delete&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Full documentation for the AWS SDK for Ruby is available here: &amp;lt;ref&amp;gt;[http://docs.aws.amazon.com/AWSRubySDK/latest/_index.html AWS SDK for Ruby]&amp;lt;/ref&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==References==&lt;br /&gt;
&amp;lt;references/&amp;gt;&lt;/div&gt;</summary>
		<author><name>Achen4</name></author>
	</entry>
	<entry>
		<id>https://wiki.expertiza.ncsu.edu/index.php?title=CSC/ECE_517_Spring_2015/ch1a_7_SA&amp;diff=93702</id>
		<title>CSC/ECE 517 Spring 2015/ch1a 7 SA</title>
		<link rel="alternate" type="text/html" href="https://wiki.expertiza.ncsu.edu/index.php?title=CSC/ECE_517_Spring_2015/ch1a_7_SA&amp;diff=93702"/>
		<updated>2015-02-14T01:19:35Z</updated>

		<summary type="html">&lt;p&gt;Achen4: /* Forced removal of non-empty buckets */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;[[File:S3.gif|frame|Source: http://www.w7cloud.com/7-reasons-to-use-amazon-s3-cloud-computing-online-storage/|right]] Amazon Simple Storage Service (Amazon S3) is a remote, scalable, secure, and cost efficient storage space service provided by Amazon. Users are able to access their storage on Amazon S3 from the web via REST &amp;lt;ref&amp;gt;[http://en.wikipedia.org/wiki/Representational_state_transfer Wikipedia: REST]&amp;lt;/ref&amp;gt; HTTP &amp;lt;ref&amp;gt;[http://en.wikipedia.org/wiki/Hypertext_Transfer_Protocol HTTP]&amp;lt;/ref&amp;gt;, or SOAP &amp;lt;ref&amp;gt;[http://en.wikipedia.org/wiki/SOAP SOAP]&amp;lt;/ref&amp;gt; making their data accessible from virtually anywhere in the world. Amazon S3 implements redundancy across multiple devices on multiple facilities in order to safeguard against application failure ,data loss and minimization of downtime &amp;lt;ref&amp;gt;[http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/AmazonS3.html Amazon S3]&amp;lt;/ref&amp;gt;. Some of the most prominent users of Amazon S3 include: Netflix, SmugMug, Wetransfer, Pinterest, and NASDAQ &amp;lt;ref&amp;gt;[http://aws.amazon.com/s3/ Amazon S3 Homepage]&amp;lt;/ref&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
[https://docs.google.com/a/ncsu.edu/document/d/1TgBtp7flIPKJwkkShgtcIkt--mtHuwVHsQX6Tpzj1rc/edit Writing Assignment 1a]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
==Background==&lt;br /&gt;
&lt;br /&gt;
Amazon S3 launched in March of 2006 in the United States &amp;lt;ref&amp;gt;[http://phx.corporate-ir.net/phoenix.zhtml?c=176060&amp;amp;p=irol-newsArticle&amp;amp;ID=830816 Amazon Press Release]&amp;lt;/ref&amp;gt; and in Europe in November of 2007 &amp;lt;ref&amp;gt;[http://phx.corporate-ir.net/phoenix.zhtml?c=176060&amp;amp;p=irol-newsArticle&amp;amp;ID=1072982 Amazon Press Release]&amp;lt;/ref&amp;gt;. Since its inception, Amazon S3 has reported tremendous growth. Beginning in July of 2006, S3 hosted 800 million objects; April of 2007, 5 billion objects; October of 2007, 10 billion; Jan 2008, 14 billion &amp;lt;ref&amp;gt;[http://www.allthingsdistributed.com/2008/03/happy_birthday_amazon_s3.html Happy birthday Amazon S3]&amp;lt;/ref&amp;gt;; October 2008, 29 billion &amp;lt;ref&amp;gt;[https://aws.amazon.com/blogs/aws/amazon-s3-now/ amazon s3 now]&amp;lt;/ref&amp;gt;; March 2009, 52 billion &amp;lt;ref&amp;gt;[https://aws.amazon.com/blogs/aws/celebrating-s3s-third-birthday-with-an-upload-promotion/ Upload promotion]&amp;lt;/ref&amp;gt;; August 2009, 64 billion &amp;lt;ref&amp;gt;[http://www.eweek.com/c/a/Cloud-Computing/Amazons-Head-Start-in-the-Cloud-Pays-Off-584083 Amazons Head Start in the Cloud]&amp;lt;/ref&amp;gt;. In April of 2013, S3 now hosts more than 2 trillion objects and on average 1.1 million requests every second! &amp;lt;ref&amp;gt;[https://aws.amazon.com/blogs/aws/amazon-s3-two-trillion-objects-11-million-requests-second/ Two trillion objects]&amp;lt;/ref&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
===Design===&lt;br /&gt;
&lt;br /&gt;
S3 is an example of an object storage and is not like a traditional hierarchical file system. S3 exposes a simple feature set to improve robustness and all data in S3 is accessed in the terms of objects and buckets. &lt;br /&gt;
&lt;br /&gt;
====Objects====&lt;br /&gt;
&lt;br /&gt;
Objects are the basic units of storage in Amazon S3. Each object is composed of object data and metadata. S3 supports a size of up to 5 Terabytes per object. Each object has an associated metadata that is used to identify the object. Metadata is a set of name-value pairs that describe the object like date modified. Custom data about the object can be stored in metadata by the user. Every object is identified by a user defined key and is versioned by default. &amp;lt;ref&amp;gt;[http://docs.aws.amazon.com/AmazonS3/latest/dev/Introduction.html S3 Introduction]&amp;lt;/ref&amp;gt;. An object consists of the following - Key, Version ID, Value, Metadata, Subresources and Access Control Information. &amp;lt;ref&amp;gt;[http://docs.aws.amazon.com/AmazonS3/latest/dev/UsingObjects.html Using Objects]&amp;lt;/ref&amp;gt;&lt;br /&gt;
&lt;br /&gt;
====Buckets====&lt;br /&gt;
&lt;br /&gt;
A bucket is a container for objects and every object must be part of a bucket. Any number of objects can be part of a Bucket. Buckets can be configured to be hosted in a particular region (US, EU, Asia Pacific etc.) in order to optimize latency. S3 limits the number of buckets per account to 100. &amp;lt;ref&amp;gt;[http://docs.aws.amazon.com/AmazonS3/latest/dev/UsingBucket.html Using Bucket]&amp;lt;/ref&amp;gt;&lt;br /&gt;
&lt;br /&gt;
====Keys and Metadata====&lt;br /&gt;
&lt;br /&gt;
An user specifies a key on object creation which is used to uniquely identify the object in the bucket. Keys for the objects can be at most 1024 bytes long.&lt;br /&gt;
&lt;br /&gt;
There are two kinds of metadata for an object - System metadata and Object metadata. System metadata is used by S3 for object management. For eg. - Data, Content-Type etc. are stored as System metadata. Object metadata is optional and can be used by the user to add additional metadata to the objects during object creation. &amp;lt;ref&amp;gt;[http://docs.aws.amazon.com/AmazonS3/latest/dev/UsingMetadata.html Using Metadata]&amp;lt;/ref&amp;gt;&lt;br /&gt;
&lt;br /&gt;
====Regions====&lt;br /&gt;
&lt;br /&gt;
Regions allow a user to specify the geographical region where the buckets will be stored. This can be used to optimize latency and minimizing costs.&lt;br /&gt;
S3 supports the following regions - US Standard, US West (Oregon) region, US West (N. California) region, EU (Ireland) region, EU (Frankfurt) region, Asia Pacific (Singapore) region, Asia Pacific (Tokyo) region, Asia Pacific (Sydney) region, South America (Sao Paulo) region &amp;lt;ref&amp;gt;[http://docs.aws.amazon.com/AmazonS3/latest/dev/Introduction.html#Regions Regions]&amp;lt;/ref&amp;gt;&lt;br /&gt;
&lt;br /&gt;
====Versioning====&lt;br /&gt;
&lt;br /&gt;
All objects in S3 are versioned by default and it can be used to retrieve and restore every version of an object in a bucket. Every change to an object(create, modify, delete) results in a separate version of the object which can be later used for restoring or recovery. Versioning is done at the bucket level and not for individual objects. It can be turned off or on per bucket but a versioned-enabled bucket cannot be turned to an unversioned bucket. Versioning can only be paused in these cases. &amp;lt;ref&amp;gt;[http://docs.aws.amazon.com/AmazonS3/latest/dev/Versioning.html Versioning]&amp;lt;/ref&amp;gt;&lt;br /&gt;
&lt;br /&gt;
====Access Permissions====&lt;br /&gt;
&lt;br /&gt;
All resources(buckets,objects etc) are private in Amazon S3 by default. Only the resource owner can access the resource and can grant access to other users to accesss the resource. There are two types of access policies in S3 - Resource-based and user policies. Resource-based policies are attached to a particular resource and user policies are assigned to a particular user.&amp;lt;ref&amp;gt;[http://docs.aws.amazon.com/AmazonS3/latest/dev/s3-access-control.html Access Control]&amp;lt;/ref&amp;gt;&lt;br /&gt;
&lt;br /&gt;
====Data Protection====&lt;br /&gt;
&lt;br /&gt;
Objects are redundantly stored on multiple devices across multiple facilities within a region for durability. To improve durability, write requests do not return success before storing the data across multiple facilities. Also checksums are used to verify data integrity. If any corruption is detected, it is repaired using redundant data.&amp;lt;ref&amp;gt;[http://docs.aws.amazon.com/AmazonS3/latest/dev/DataDurability.html Data Durability]&amp;lt;/ref&amp;gt;&lt;br /&gt;
&lt;br /&gt;
===Ruby and Amazon S3===&lt;br /&gt;
&lt;br /&gt;
Amazon Web Services (AWS) provides an SDK &amp;lt;ref&amp;gt;[http://rubygems.org/gems/aws-sdk| download]&amp;lt;/ref&amp;gt; that works with Ruby for many amazon webservices, including Amazon S3. Developers new to the Amazon AWS SDK should begin with version 2 as it includes many built in features such as waiters, automatically paginated responses, and a streamlined plugin style architecture. Version 2 of the SDK has 2 &amp;quot;packages&amp;quot;, also referred to as &amp;quot;gems&amp;quot; &amp;lt;ref&amp;gt;[http://en.wikipedia.org/wiki/RubyGems Wikipedia: RubyGems]&amp;lt;/ref&amp;gt;:&lt;br /&gt;
&lt;br /&gt;
:* '''aws-sdk-core''' - provides a direct mapping to the AWS APIs including automatic response paging, waiters, parameter validation, and Ruby type support&lt;br /&gt;
:* '''aws-sdk-resources'''  - provides an object-oriented abstraction over low-level interfaces in the core to reduce the complexity of utilizing core interfaces; resource objects reference other objects such as an Amazon S3 instance and the attributes and actions as instance variables and methods.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
It should also be noted that there exists a Version 1 of the aws sdk that lacks some &amp;quot;convenience features&amp;quot; otherwise available in version 2 of the sdk. For more information see the &amp;lt;ref&amp;gt;[http://ruby.awsblog.com/post/Tx2OMCYFEZX2I6A/AWS-SDK-for-Ruby-V2-Preview-Release|AWS Ruby Development Blog]&amp;lt;/ref&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==Examples==&lt;br /&gt;
The following section contains examples of ruby interfacing with S3. Sources and documentation for the code are provided. Please observe the copyrights if you choose to use any or all of the posted code.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
'''Note:''' There are 3 key classes in AWS SDK &amp;lt;ref&amp;gt;[http://docs.aws.amazon.com/AmazonS3/latest/dev/UsingTheMPRubyAPI.html Using the Ruby API]&amp;lt;/ref&amp;gt; -&lt;br /&gt;
&lt;br /&gt;
:* '''AWS::S3''' - Denotes an interface to Amazon S3 for the Ruby SDK. It has the ''#buckets'' instance method for creating new buckets or accessing existing buckets.&lt;br /&gt;
:* '''AWS::S3::Bucket''' - Denotes an Amazon S3 Bucket. It provides the ''#objects'' instance method to access existing objects and also other methods to get information about a bucket.&lt;br /&gt;
:* '''AWS::S3::S3Object''' - Denotes an Amazon S3 Object. It provides the method that gives information about the object and also setting access permissions, copying, deleting and uploading objects.&lt;br /&gt;
&lt;br /&gt;
===Creating a connection to S3 server===&lt;br /&gt;
&lt;br /&gt;
Connecting to the S3 server is the essential starting point of accessing your data. The follow example shows how to connect to the server via. SSL &amp;lt;ref&amp;gt;Wikipedia: SSL http://en.wikipedia.org/wiki/SSL&amp;lt;/ref&amp;gt;&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
AWS::S3::Base.establish_connection!(&lt;br /&gt;
        :server            =&amp;gt; 'objects.example.com',&lt;br /&gt;
        :use_ssl           =&amp;gt; true,&lt;br /&gt;
        :access_key_id     =&amp;gt; 'my-access-key',&lt;br /&gt;
        :secret_access_key =&amp;gt; 'my-secret-key'&lt;br /&gt;
)&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
:Source: http://ceph.com/docs/master/radosgw/s3/ruby/&lt;br /&gt;
&lt;br /&gt;
===Listing all buckets you own===&lt;br /&gt;
Buckets hold data about the objects you upload. To access any object, you must access the bucket first. This example shows you how to query the S3 server for a list of all the buckets you own.&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
AWS::S3::Service.buckets.each do |bucket|&lt;br /&gt;
        puts &amp;quot;#{bucket.name}\t#{bucket.creation_date}&amp;quot;&lt;br /&gt;
end&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
'''Expected output:'''&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
mybuckat1   2011-04-21T18:05:39.000Z&lt;br /&gt;
mybuckat2   2011-04-21T18:05:48.000Z&lt;br /&gt;
mybuckat3   2011-04-21T18:07:18.000Z&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
:Source: http://ceph.com/docs/master/radosgw/s3/ruby/&lt;br /&gt;
&lt;br /&gt;
===Listing a bucket's contents===&lt;br /&gt;
For a known bucket (see example for listing all buckets you own), you can also query the server for all objects inside. This code shows you how.&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
new_bucket = AWS::S3::Bucket.find('my-new-bucket')&lt;br /&gt;
new_bucket.each do |object|&lt;br /&gt;
        puts &amp;quot;#{object.key}\t#{object.about&amp;lt;ref&amp;gt;['content-length']}\t#{object.about&amp;lt;ref&amp;gt;['last-modified']}&amp;quot;&lt;br /&gt;
end&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
'''Expected output'''&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
file1.filex 251262  2011-08-08T21:35:48.000Z&lt;br /&gt;
file2.filex 262518  2011-08-08T21:38:01.000Z&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
:Source: http://ceph.com/docs/master/radosgw/s3/ruby/&lt;br /&gt;
&lt;br /&gt;
===Deleting a bucket===&lt;br /&gt;
Bucket removal may be necessary when you're trying to reduce the cost of maintaining your data on the S3 servers. This code allows you to delete buckets but only if they're empty.&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
AWS::S3::Bucket.delete('my-new-bucket')&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
:Source: http://ceph.com/docs/master/radosgw/s3/ruby/&lt;br /&gt;
&lt;br /&gt;
===Forced removal of non-empty buckets===&lt;br /&gt;
If you want to forcibly remove a bucket and dump all of its contents, you can force a deletion as shown below.&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
AWS::S3::Bucket.delete('my-new-bucket', :force =&amp;gt; true)&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
:Source: http://ceph.com/docs/master/radosgw/s3/ruby/&lt;br /&gt;
&lt;br /&gt;
===Creating an object===&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
AWS::S3::S3Object.store(&lt;br /&gt;
        'hello.txt',&lt;br /&gt;
        'Hello World!',&lt;br /&gt;
        'my-new-bucket',&lt;br /&gt;
        :content_type =&amp;gt; 'text/plain'&lt;br /&gt;
)&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
:Source: http://ceph.com/docs/master/radosgw/s3/ruby/&lt;br /&gt;
&lt;br /&gt;
===Change an object's ACL (access control list)===&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
policy = AWS::S3::S3Object.acl('hello.txt', 'my-new-bucket')&lt;br /&gt;
policy.grants = &amp;lt;ref&amp;gt;[ AWS::S3::ACL::Grant.grant(:public_read) ]&lt;br /&gt;
AWS::S3::S3Object.acl('hello.txt', 'my-new-bucket', policy)&lt;br /&gt;
&lt;br /&gt;
policy = AWS::S3::S3Object.acl('secret_plans.txt', 'my-new-bucket')&lt;br /&gt;
policy.grants = &amp;lt;ref&amp;gt;[]&lt;br /&gt;
AWS::S3::S3Object.acl('secret_plans.txt', 'my-new-bucket', policy)&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
:Source: http://ceph.com/docs/master/radosgw/s3/ruby/&lt;br /&gt;
&lt;br /&gt;
===Download an object to a folder===&lt;br /&gt;
:'''Note:''' ''This downloads the object poetry.pdf and saves it in /home/larry/documents/''&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
open('/home/larry/documents/poetry.pdf', 'w') do |file|&lt;br /&gt;
        AWS::S3::S3Object.stream('poetry.pdf', 'my-new-bucket') do |chunk|&lt;br /&gt;
                file.write(chunk)&lt;br /&gt;
        end&lt;br /&gt;
end&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
:Source: http://ceph.com/docs/master/radosgw/s3/ruby/&lt;br /&gt;
&lt;br /&gt;
===Deleting an object===&lt;br /&gt;
:'''Note:''''' This deletes the object goodbye.txt''&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
AWS::S3::S3Object.delete('goodbye.txt', 'my-new-bucket')&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
:Source: http://ceph.com/docs/master/radosgw/s3/ruby/&lt;br /&gt;
&lt;br /&gt;
===Generating object download urls===&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
puts AWS::S3::S3Object.url_for(&lt;br /&gt;
        'hello.txt',&lt;br /&gt;
        'my-new-bucket',&lt;br /&gt;
        :authenticated =&amp;gt; false&lt;br /&gt;
)&lt;br /&gt;
&lt;br /&gt;
puts AWS::S3::S3Object.url_for(&lt;br /&gt;
        'secret_plans.txt',&lt;br /&gt;
        'my-new-bucket',&lt;br /&gt;
        :expires_in =&amp;gt; 60 * 60&lt;br /&gt;
)&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
'''Expected Output:'''&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
http://objects.dreamhost.com/my-bucket-name/hello.txt&lt;br /&gt;
http://objects.dreamhost.com/my-bucket-name/secret_plans.txt?Signature=XXXXXXXXXXXXXXXXXXXXXXXXXXX&amp;amp;Expires=1316027075&amp;amp;AWSAccessKeyId=XXXXXXXXXXXXXXXXXXX&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
:Source: http://ceph.com/docs/master/radosgw/s3/ruby/&lt;br /&gt;
&lt;br /&gt;
===Upload a file to Amazon S3===&lt;br /&gt;
:As per the Apache License v 2.0, the follow code is reproducible and redistributable  with the following &amp;lt;ref&amp;gt;[http://aws.amazon.com/apache-2-0/ license]&amp;lt;/ref&amp;gt;.&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
# Copyright 2011-2013 Amazon.com, Inc. or its affiliates. All Rights Reserved.&lt;br /&gt;
#&lt;br /&gt;
# Licensed under the Apache License, Version 2.0 (the &amp;quot;License&amp;quot;). You&lt;br /&gt;
# may not use this file except in compliance with the License. A copy of&lt;br /&gt;
# the License is located at&lt;br /&gt;
#&lt;br /&gt;
#     http://aws.amazon.com/apache2.0/&lt;br /&gt;
#&lt;br /&gt;
# or in the &amp;quot;license&amp;quot; file accompanying this file. This file is&lt;br /&gt;
# distributed on an &amp;quot;AS IS&amp;quot; BASIS, WITHOUT WARRANTIES OR CONDITIONS OF&lt;br /&gt;
# ANY KIND, either express or implied. See the License for the specific&lt;br /&gt;
# language governing permissions and limitations under the License.&lt;br /&gt;
&lt;br /&gt;
require 'aws-sdk'&lt;br /&gt;
&lt;br /&gt;
(bucket_name, file_name) = ARGV&lt;br /&gt;
unless bucket_name &amp;amp;&amp;amp; file_name&lt;br /&gt;
  puts &amp;quot;Usage: upload_file.rb &amp;lt;BUCKET_NAME&amp;gt; &amp;lt;FILE_NAME&amp;gt;&amp;quot;&lt;br /&gt;
  exit 1&lt;br /&gt;
end&lt;br /&gt;
&lt;br /&gt;
# get an instance of the S3 interface using the default configuration&lt;br /&gt;
s3 = AWS::S3.new&lt;br /&gt;
&lt;br /&gt;
# create a bucket&lt;br /&gt;
b = s3.buckets.create(bucket_name)&lt;br /&gt;
&lt;br /&gt;
# upload a file&lt;br /&gt;
basename = File.basename(file_name)&lt;br /&gt;
o = b.objects[basename]&lt;br /&gt;
o.write(:file =&amp;gt; file_name)&lt;br /&gt;
&lt;br /&gt;
puts &amp;quot;Uploaded #{file_name} to:&amp;quot;&lt;br /&gt;
puts o.public_url&lt;br /&gt;
&lt;br /&gt;
# generate a presigned URL&lt;br /&gt;
puts &amp;quot;\nUse this URL to download the file:&amp;quot;&lt;br /&gt;
puts o.url_for(:read)&lt;br /&gt;
&lt;br /&gt;
puts &amp;quot;(press any key to delete the object)&amp;quot;&lt;br /&gt;
$stdin.getc&lt;br /&gt;
&lt;br /&gt;
o.delete&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
See the following link for the documentation for AWS SDK - &amp;lt;ref&amp;gt;[http://docs.aws.amazon.com/AWSRubySDK/latest/_index.html AWS SDK for Ruby]&amp;lt;/ref&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==References==&lt;br /&gt;
&amp;lt;references/&amp;gt;&lt;/div&gt;</summary>
		<author><name>Achen4</name></author>
	</entry>
	<entry>
		<id>https://wiki.expertiza.ncsu.edu/index.php?title=CSC/ECE_517_Spring_2015/ch1a_7_SA&amp;diff=93700</id>
		<title>CSC/ECE 517 Spring 2015/ch1a 7 SA</title>
		<link rel="alternate" type="text/html" href="https://wiki.expertiza.ncsu.edu/index.php?title=CSC/ECE_517_Spring_2015/ch1a_7_SA&amp;diff=93700"/>
		<updated>2015-02-14T01:18:36Z</updated>

		<summary type="html">&lt;p&gt;Achen4: /* Deleting a bucket */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;[[File:S3.gif|frame|Source: http://www.w7cloud.com/7-reasons-to-use-amazon-s3-cloud-computing-online-storage/|right]] Amazon Simple Storage Service (Amazon S3) is a remote, scalable, secure, and cost efficient storage space service provided by Amazon. Users are able to access their storage on Amazon S3 from the web via REST &amp;lt;ref&amp;gt;[http://en.wikipedia.org/wiki/Representational_state_transfer Wikipedia: REST]&amp;lt;/ref&amp;gt; HTTP &amp;lt;ref&amp;gt;[http://en.wikipedia.org/wiki/Hypertext_Transfer_Protocol HTTP]&amp;lt;/ref&amp;gt;, or SOAP &amp;lt;ref&amp;gt;[http://en.wikipedia.org/wiki/SOAP SOAP]&amp;lt;/ref&amp;gt; making their data accessible from virtually anywhere in the world. Amazon S3 implements redundancy across multiple devices on multiple facilities in order to safeguard against application failure ,data loss and minimization of downtime &amp;lt;ref&amp;gt;[http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/AmazonS3.html Amazon S3]&amp;lt;/ref&amp;gt;. Some of the most prominent users of Amazon S3 include: Netflix, SmugMug, Wetransfer, Pinterest, and NASDAQ &amp;lt;ref&amp;gt;[http://aws.amazon.com/s3/ Amazon S3 Homepage]&amp;lt;/ref&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
[https://docs.google.com/a/ncsu.edu/document/d/1TgBtp7flIPKJwkkShgtcIkt--mtHuwVHsQX6Tpzj1rc/edit Writing Assignment 1a]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
==Background==&lt;br /&gt;
&lt;br /&gt;
Amazon S3 launched in March of 2006 in the United States &amp;lt;ref&amp;gt;[http://phx.corporate-ir.net/phoenix.zhtml?c=176060&amp;amp;p=irol-newsArticle&amp;amp;ID=830816 Amazon Press Release]&amp;lt;/ref&amp;gt; and in Europe in November of 2007 &amp;lt;ref&amp;gt;[http://phx.corporate-ir.net/phoenix.zhtml?c=176060&amp;amp;p=irol-newsArticle&amp;amp;ID=1072982 Amazon Press Release]&amp;lt;/ref&amp;gt;. Since its inception, Amazon S3 has reported tremendous growth. Beginning in July of 2006, S3 hosted 800 million objects; April of 2007, 5 billion objects; October of 2007, 10 billion; Jan 2008, 14 billion &amp;lt;ref&amp;gt;[http://www.allthingsdistributed.com/2008/03/happy_birthday_amazon_s3.html Happy birthday Amazon S3]&amp;lt;/ref&amp;gt;; October 2008, 29 billion &amp;lt;ref&amp;gt;[https://aws.amazon.com/blogs/aws/amazon-s3-now/ amazon s3 now]&amp;lt;/ref&amp;gt;; March 2009, 52 billion &amp;lt;ref&amp;gt;[https://aws.amazon.com/blogs/aws/celebrating-s3s-third-birthday-with-an-upload-promotion/ Upload promotion]&amp;lt;/ref&amp;gt;; August 2009, 64 billion &amp;lt;ref&amp;gt;[http://www.eweek.com/c/a/Cloud-Computing/Amazons-Head-Start-in-the-Cloud-Pays-Off-584083 Amazons Head Start in the Cloud]&amp;lt;/ref&amp;gt;. In April of 2013, S3 now hosts more than 2 trillion objects and on average 1.1 million requests every second! &amp;lt;ref&amp;gt;[https://aws.amazon.com/blogs/aws/amazon-s3-two-trillion-objects-11-million-requests-second/ Two trillion objects]&amp;lt;/ref&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
===Design===&lt;br /&gt;
&lt;br /&gt;
S3 is an example of an object storage and is not like a traditional hierarchical file system. S3 exposes a simple feature set to improve robustness and all data in S3 is accessed in the terms of objects and buckets. &lt;br /&gt;
&lt;br /&gt;
====Objects====&lt;br /&gt;
&lt;br /&gt;
Objects are the basic units of storage in Amazon S3. Each object is composed of object data and metadata. S3 supports a size of up to 5 Terabytes per object. Each object has an associated metadata that is used to identify the object. Metadata is a set of name-value pairs that describe the object like date modified. Custom data about the object can be stored in metadata by the user. Every object is identified by a user defined key and is versioned by default. &amp;lt;ref&amp;gt;[http://docs.aws.amazon.com/AmazonS3/latest/dev/Introduction.html S3 Introduction]&amp;lt;/ref&amp;gt;. An object consists of the following - Key, Version ID, Value, Metadata, Subresources and Access Control Information. &amp;lt;ref&amp;gt;[http://docs.aws.amazon.com/AmazonS3/latest/dev/UsingObjects.html Using Objects]&amp;lt;/ref&amp;gt;&lt;br /&gt;
&lt;br /&gt;
====Buckets====&lt;br /&gt;
&lt;br /&gt;
A bucket is a container for objects and every object must be part of a bucket. Any number of objects can be part of a Bucket. Buckets can be configured to be hosted in a particular region (US, EU, Asia Pacific etc.) in order to optimize latency. S3 limits the number of buckets per account to 100. &amp;lt;ref&amp;gt;[http://docs.aws.amazon.com/AmazonS3/latest/dev/UsingBucket.html Using Bucket]&amp;lt;/ref&amp;gt;&lt;br /&gt;
&lt;br /&gt;
====Keys and Metadata====&lt;br /&gt;
&lt;br /&gt;
An user specifies a key on object creation which is used to uniquely identify the object in the bucket. Keys for the objects can be at most 1024 bytes long.&lt;br /&gt;
&lt;br /&gt;
There are two kinds of metadata for an object - System metadata and Object metadata. System metadata is used by S3 for object management. For eg. - Data, Content-Type etc. are stored as System metadata. Object metadata is optional and can be used by the user to add additional metadata to the objects during object creation. &amp;lt;ref&amp;gt;[http://docs.aws.amazon.com/AmazonS3/latest/dev/UsingMetadata.html Using Metadata]&amp;lt;/ref&amp;gt;&lt;br /&gt;
&lt;br /&gt;
====Regions====&lt;br /&gt;
&lt;br /&gt;
Regions allow a user to specify the geographical region where the buckets will be stored. This can be used to optimize latency and minimizing costs.&lt;br /&gt;
S3 supports the following regions - US Standard, US West (Oregon) region, US West (N. California) region, EU (Ireland) region, EU (Frankfurt) region, Asia Pacific (Singapore) region, Asia Pacific (Tokyo) region, Asia Pacific (Sydney) region, South America (Sao Paulo) region &amp;lt;ref&amp;gt;[http://docs.aws.amazon.com/AmazonS3/latest/dev/Introduction.html#Regions Regions]&amp;lt;/ref&amp;gt;&lt;br /&gt;
&lt;br /&gt;
====Versioning====&lt;br /&gt;
&lt;br /&gt;
All objects in S3 are versioned by default and it can be used to retrieve and restore every version of an object in a bucket. Every change to an object(create, modify, delete) results in a separate version of the object which can be later used for restoring or recovery. Versioning is done at the bucket level and not for individual objects. It can be turned off or on per bucket but a versioned-enabled bucket cannot be turned to an unversioned bucket. Versioning can only be paused in these cases. &amp;lt;ref&amp;gt;[http://docs.aws.amazon.com/AmazonS3/latest/dev/Versioning.html Versioning]&amp;lt;/ref&amp;gt;&lt;br /&gt;
&lt;br /&gt;
====Access Permissions====&lt;br /&gt;
&lt;br /&gt;
All resources(buckets,objects etc) are private in Amazon S3 by default. Only the resource owner can access the resource and can grant access to other users to accesss the resource. There are two types of access policies in S3 - Resource-based and user policies. Resource-based policies are attached to a particular resource and user policies are assigned to a particular user.&amp;lt;ref&amp;gt;[http://docs.aws.amazon.com/AmazonS3/latest/dev/s3-access-control.html Access Control]&amp;lt;/ref&amp;gt;&lt;br /&gt;
&lt;br /&gt;
====Data Protection====&lt;br /&gt;
&lt;br /&gt;
Objects are redundantly stored on multiple devices across multiple facilities within a region for durability. To improve durability, write requests do not return success before storing the data across multiple facilities. Also checksums are used to verify data integrity. If any corruption is detected, it is repaired using redundant data.&amp;lt;ref&amp;gt;[http://docs.aws.amazon.com/AmazonS3/latest/dev/DataDurability.html Data Durability]&amp;lt;/ref&amp;gt;&lt;br /&gt;
&lt;br /&gt;
===Ruby and Amazon S3===&lt;br /&gt;
&lt;br /&gt;
Amazon Web Services (AWS) provides an SDK &amp;lt;ref&amp;gt;[http://rubygems.org/gems/aws-sdk| download]&amp;lt;/ref&amp;gt; that works with Ruby for many amazon webservices, including Amazon S3. Developers new to the Amazon AWS SDK should begin with version 2 as it includes many built in features such as waiters, automatically paginated responses, and a streamlined plugin style architecture. Version 2 of the SDK has 2 &amp;quot;packages&amp;quot;, also referred to as &amp;quot;gems&amp;quot; &amp;lt;ref&amp;gt;[http://en.wikipedia.org/wiki/RubyGems Wikipedia: RubyGems]&amp;lt;/ref&amp;gt;:&lt;br /&gt;
&lt;br /&gt;
:* '''aws-sdk-core''' - provides a direct mapping to the AWS APIs including automatic response paging, waiters, parameter validation, and Ruby type support&lt;br /&gt;
:* '''aws-sdk-resources'''  - provides an object-oriented abstraction over low-level interfaces in the core to reduce the complexity of utilizing core interfaces; resource objects reference other objects such as an Amazon S3 instance and the attributes and actions as instance variables and methods.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
It should also be noted that there exists a Version 1 of the aws sdk that lacks some &amp;quot;convenience features&amp;quot; otherwise available in version 2 of the sdk. For more information see the &amp;lt;ref&amp;gt;[http://ruby.awsblog.com/post/Tx2OMCYFEZX2I6A/AWS-SDK-for-Ruby-V2-Preview-Release|AWS Ruby Development Blog]&amp;lt;/ref&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==Examples==&lt;br /&gt;
The following section contains examples of ruby interfacing with S3. Sources and documentation for the code are provided. Please observe the copyrights if you choose to use any or all of the posted code.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
'''Note:''' There are 3 key classes in AWS SDK &amp;lt;ref&amp;gt;[http://docs.aws.amazon.com/AmazonS3/latest/dev/UsingTheMPRubyAPI.html Using the Ruby API]&amp;lt;/ref&amp;gt; -&lt;br /&gt;
&lt;br /&gt;
:* '''AWS::S3''' - Denotes an interface to Amazon S3 for the Ruby SDK. It has the ''#buckets'' instance method for creating new buckets or accessing existing buckets.&lt;br /&gt;
:* '''AWS::S3::Bucket''' - Denotes an Amazon S3 Bucket. It provides the ''#objects'' instance method to access existing objects and also other methods to get information about a bucket.&lt;br /&gt;
:* '''AWS::S3::S3Object''' - Denotes an Amazon S3 Object. It provides the method that gives information about the object and also setting access permissions, copying, deleting and uploading objects.&lt;br /&gt;
&lt;br /&gt;
===Creating a connection to S3 server===&lt;br /&gt;
&lt;br /&gt;
Connecting to the S3 server is the essential starting point of accessing your data. The follow example shows how to connect to the server via. SSL &amp;lt;ref&amp;gt;Wikipedia: SSL http://en.wikipedia.org/wiki/SSL&amp;lt;/ref&amp;gt;&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
AWS::S3::Base.establish_connection!(&lt;br /&gt;
        :server            =&amp;gt; 'objects.example.com',&lt;br /&gt;
        :use_ssl           =&amp;gt; true,&lt;br /&gt;
        :access_key_id     =&amp;gt; 'my-access-key',&lt;br /&gt;
        :secret_access_key =&amp;gt; 'my-secret-key'&lt;br /&gt;
)&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
:Source: http://ceph.com/docs/master/radosgw/s3/ruby/&lt;br /&gt;
&lt;br /&gt;
===Listing all buckets you own===&lt;br /&gt;
Buckets hold data about the objects you upload. To access any object, you must access the bucket first. This example shows you how to query the S3 server for a list of all the buckets you own.&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
AWS::S3::Service.buckets.each do |bucket|&lt;br /&gt;
        puts &amp;quot;#{bucket.name}\t#{bucket.creation_date}&amp;quot;&lt;br /&gt;
end&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
'''Expected output:'''&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
mybuckat1   2011-04-21T18:05:39.000Z&lt;br /&gt;
mybuckat2   2011-04-21T18:05:48.000Z&lt;br /&gt;
mybuckat3   2011-04-21T18:07:18.000Z&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
:Source: http://ceph.com/docs/master/radosgw/s3/ruby/&lt;br /&gt;
&lt;br /&gt;
===Listing a bucket's contents===&lt;br /&gt;
For a known bucket (see example for listing all buckets you own), you can also query the server for all objects inside. This code shows you how.&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
new_bucket = AWS::S3::Bucket.find('my-new-bucket')&lt;br /&gt;
new_bucket.each do |object|&lt;br /&gt;
        puts &amp;quot;#{object.key}\t#{object.about&amp;lt;ref&amp;gt;['content-length']}\t#{object.about&amp;lt;ref&amp;gt;['last-modified']}&amp;quot;&lt;br /&gt;
end&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
'''Expected output'''&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
file1.filex 251262  2011-08-08T21:35:48.000Z&lt;br /&gt;
file2.filex 262518  2011-08-08T21:38:01.000Z&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
:Source: http://ceph.com/docs/master/radosgw/s3/ruby/&lt;br /&gt;
&lt;br /&gt;
===Deleting a bucket===&lt;br /&gt;
Bucket removal may be necessary when you're trying to reduce the cost of maintaining your data on the S3 servers. This code allows you to delete buckets but only if they're empty.&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
AWS::S3::Bucket.delete('my-new-bucket')&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
:Source: http://ceph.com/docs/master/radosgw/s3/ruby/&lt;br /&gt;
&lt;br /&gt;
===Forced removal of non-empty buckets===&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
AWS::S3::Bucket.delete('my-new-bucket', :force =&amp;gt; true)&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
:Source: http://ceph.com/docs/master/radosgw/s3/ruby/&lt;br /&gt;
&lt;br /&gt;
===Creating an object===&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
AWS::S3::S3Object.store(&lt;br /&gt;
        'hello.txt',&lt;br /&gt;
        'Hello World!',&lt;br /&gt;
        'my-new-bucket',&lt;br /&gt;
        :content_type =&amp;gt; 'text/plain'&lt;br /&gt;
)&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
:Source: http://ceph.com/docs/master/radosgw/s3/ruby/&lt;br /&gt;
&lt;br /&gt;
===Change an object's ACL (access control list)===&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
policy = AWS::S3::S3Object.acl('hello.txt', 'my-new-bucket')&lt;br /&gt;
policy.grants = &amp;lt;ref&amp;gt;[ AWS::S3::ACL::Grant.grant(:public_read) ]&lt;br /&gt;
AWS::S3::S3Object.acl('hello.txt', 'my-new-bucket', policy)&lt;br /&gt;
&lt;br /&gt;
policy = AWS::S3::S3Object.acl('secret_plans.txt', 'my-new-bucket')&lt;br /&gt;
policy.grants = &amp;lt;ref&amp;gt;[]&lt;br /&gt;
AWS::S3::S3Object.acl('secret_plans.txt', 'my-new-bucket', policy)&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
:Source: http://ceph.com/docs/master/radosgw/s3/ruby/&lt;br /&gt;
&lt;br /&gt;
===Download an object to a folder===&lt;br /&gt;
:'''Note:''' ''This downloads the object poetry.pdf and saves it in /home/larry/documents/''&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
open('/home/larry/documents/poetry.pdf', 'w') do |file|&lt;br /&gt;
        AWS::S3::S3Object.stream('poetry.pdf', 'my-new-bucket') do |chunk|&lt;br /&gt;
                file.write(chunk)&lt;br /&gt;
        end&lt;br /&gt;
end&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
:Source: http://ceph.com/docs/master/radosgw/s3/ruby/&lt;br /&gt;
&lt;br /&gt;
===Deleting an object===&lt;br /&gt;
:'''Note:''''' This deletes the object goodbye.txt''&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
AWS::S3::S3Object.delete('goodbye.txt', 'my-new-bucket')&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
:Source: http://ceph.com/docs/master/radosgw/s3/ruby/&lt;br /&gt;
&lt;br /&gt;
===Generating object download urls===&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
puts AWS::S3::S3Object.url_for(&lt;br /&gt;
        'hello.txt',&lt;br /&gt;
        'my-new-bucket',&lt;br /&gt;
        :authenticated =&amp;gt; false&lt;br /&gt;
)&lt;br /&gt;
&lt;br /&gt;
puts AWS::S3::S3Object.url_for(&lt;br /&gt;
        'secret_plans.txt',&lt;br /&gt;
        'my-new-bucket',&lt;br /&gt;
        :expires_in =&amp;gt; 60 * 60&lt;br /&gt;
)&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
'''Expected Output:'''&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
http://objects.dreamhost.com/my-bucket-name/hello.txt&lt;br /&gt;
http://objects.dreamhost.com/my-bucket-name/secret_plans.txt?Signature=XXXXXXXXXXXXXXXXXXXXXXXXXXX&amp;amp;Expires=1316027075&amp;amp;AWSAccessKeyId=XXXXXXXXXXXXXXXXXXX&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
:Source: http://ceph.com/docs/master/radosgw/s3/ruby/&lt;br /&gt;
&lt;br /&gt;
===Upload a file to Amazon S3===&lt;br /&gt;
:As per the Apache License v 2.0, the follow code is reproducible and redistributable  with the following &amp;lt;ref&amp;gt;[http://aws.amazon.com/apache-2-0/ license]&amp;lt;/ref&amp;gt;.&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
# Copyright 2011-2013 Amazon.com, Inc. or its affiliates. All Rights Reserved.&lt;br /&gt;
#&lt;br /&gt;
# Licensed under the Apache License, Version 2.0 (the &amp;quot;License&amp;quot;). You&lt;br /&gt;
# may not use this file except in compliance with the License. A copy of&lt;br /&gt;
# the License is located at&lt;br /&gt;
#&lt;br /&gt;
#     http://aws.amazon.com/apache2.0/&lt;br /&gt;
#&lt;br /&gt;
# or in the &amp;quot;license&amp;quot; file accompanying this file. This file is&lt;br /&gt;
# distributed on an &amp;quot;AS IS&amp;quot; BASIS, WITHOUT WARRANTIES OR CONDITIONS OF&lt;br /&gt;
# ANY KIND, either express or implied. See the License for the specific&lt;br /&gt;
# language governing permissions and limitations under the License.&lt;br /&gt;
&lt;br /&gt;
require 'aws-sdk'&lt;br /&gt;
&lt;br /&gt;
(bucket_name, file_name) = ARGV&lt;br /&gt;
unless bucket_name &amp;amp;&amp;amp; file_name&lt;br /&gt;
  puts &amp;quot;Usage: upload_file.rb &amp;lt;BUCKET_NAME&amp;gt; &amp;lt;FILE_NAME&amp;gt;&amp;quot;&lt;br /&gt;
  exit 1&lt;br /&gt;
end&lt;br /&gt;
&lt;br /&gt;
# get an instance of the S3 interface using the default configuration&lt;br /&gt;
s3 = AWS::S3.new&lt;br /&gt;
&lt;br /&gt;
# create a bucket&lt;br /&gt;
b = s3.buckets.create(bucket_name)&lt;br /&gt;
&lt;br /&gt;
# upload a file&lt;br /&gt;
basename = File.basename(file_name)&lt;br /&gt;
o = b.objects[basename]&lt;br /&gt;
o.write(:file =&amp;gt; file_name)&lt;br /&gt;
&lt;br /&gt;
puts &amp;quot;Uploaded #{file_name} to:&amp;quot;&lt;br /&gt;
puts o.public_url&lt;br /&gt;
&lt;br /&gt;
# generate a presigned URL&lt;br /&gt;
puts &amp;quot;\nUse this URL to download the file:&amp;quot;&lt;br /&gt;
puts o.url_for(:read)&lt;br /&gt;
&lt;br /&gt;
puts &amp;quot;(press any key to delete the object)&amp;quot;&lt;br /&gt;
$stdin.getc&lt;br /&gt;
&lt;br /&gt;
o.delete&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
See the AWS SDK for Ruby documentation for further details - &amp;lt;ref&amp;gt;[http://docs.aws.amazon.com/AWSRubySDK/latest/_index.html AWS SDK for Ruby]&amp;lt;/ref&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==References==&lt;br /&gt;
&amp;lt;references/&amp;gt;&lt;/div&gt;</summary>
		<author><name>Achen4</name></author>
	</entry>
	<entry>
		<id>https://wiki.expertiza.ncsu.edu/index.php?title=CSC/ECE_517_Spring_2015/ch1a_7_SA&amp;diff=93699</id>
		<title>CSC/ECE 517 Spring 2015/ch1a 7 SA</title>
		<link rel="alternate" type="text/html" href="https://wiki.expertiza.ncsu.edu/index.php?title=CSC/ECE_517_Spring_2015/ch1a_7_SA&amp;diff=93699"/>
		<updated>2015-02-14T01:17:19Z</updated>

		<summary type="html">&lt;p&gt;Achen4: /* Listing a bucket's contents */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;[[File:S3.gif|frame|Source: http://www.w7cloud.com/7-reasons-to-use-amazon-s3-cloud-computing-online-storage/|right]] Amazon Simple Storage Service (Amazon S3) is a remote, scalable, secure, and cost-efficient storage service provided by Amazon. Users are able to access their storage on Amazon S3 from the web via REST &amp;lt;ref&amp;gt;[http://en.wikipedia.org/wiki/Representational_state_transfer Wikipedia: REST]&amp;lt;/ref&amp;gt;, HTTP &amp;lt;ref&amp;gt;[http://en.wikipedia.org/wiki/Hypertext_Transfer_Protocol HTTP]&amp;lt;/ref&amp;gt;, or SOAP &amp;lt;ref&amp;gt;[http://en.wikipedia.org/wiki/SOAP SOAP]&amp;lt;/ref&amp;gt;, making their data accessible from virtually anywhere in the world. Amazon S3 implements redundancy across multiple devices in multiple facilities in order to safeguard against application failure and data loss and to minimize downtime &amp;lt;ref&amp;gt;[http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/AmazonS3.html Amazon S3]&amp;lt;/ref&amp;gt;. Some of the most prominent users of Amazon S3 include: Netflix, SmugMug, Wetransfer, Pinterest, and NASDAQ &amp;lt;ref&amp;gt;[http://aws.amazon.com/s3/ Amazon S3 Homepage]&amp;lt;/ref&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
[https://docs.google.com/a/ncsu.edu/document/d/1TgBtp7flIPKJwkkShgtcIkt--mtHuwVHsQX6Tpzj1rc/edit Writing Assignment 1a]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
==Background==&lt;br /&gt;
&lt;br /&gt;
Amazon S3 launched in March of 2006 in the United States &amp;lt;ref&amp;gt;[http://phx.corporate-ir.net/phoenix.zhtml?c=176060&amp;amp;p=irol-newsArticle&amp;amp;ID=830816 Amazon Press Release]&amp;lt;/ref&amp;gt; and in Europe in November of 2007 &amp;lt;ref&amp;gt;[http://phx.corporate-ir.net/phoenix.zhtml?c=176060&amp;amp;p=irol-newsArticle&amp;amp;ID=1072982 Amazon Press Release]&amp;lt;/ref&amp;gt;. Since its inception, Amazon S3 has reported tremendous growth. Beginning in July of 2006, S3 hosted 800 million objects; April of 2007, 5 billion objects; October of 2007, 10 billion; January 2008, 14 billion &amp;lt;ref&amp;gt;[http://www.allthingsdistributed.com/2008/03/happy_birthday_amazon_s3.html Happy birthday Amazon S3]&amp;lt;/ref&amp;gt;; October 2008, 29 billion &amp;lt;ref&amp;gt;[https://aws.amazon.com/blogs/aws/amazon-s3-now/ amazon s3 now]&amp;lt;/ref&amp;gt;; March 2009, 52 billion &amp;lt;ref&amp;gt;[https://aws.amazon.com/blogs/aws/celebrating-s3s-third-birthday-with-an-upload-promotion/ Upload promotion]&amp;lt;/ref&amp;gt;; August 2009, 64 billion &amp;lt;ref&amp;gt;[http://www.eweek.com/c/a/Cloud-Computing/Amazons-Head-Start-in-the-Cloud-Pays-Off-584083 Amazons Head Start in the Cloud]&amp;lt;/ref&amp;gt;. As of April 2013, S3 hosted more than 2 trillion objects and served an average of 1.1 million requests every second &amp;lt;ref&amp;gt;[https://aws.amazon.com/blogs/aws/amazon-s3-two-trillion-objects-11-million-requests-second/ Two trillion objects]&amp;lt;/ref&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
===Design===&lt;br /&gt;
&lt;br /&gt;
S3 is an example of object storage rather than a traditional hierarchical file system. S3 exposes a simple feature set to improve robustness, and all data in S3 is accessed in terms of objects and buckets.&lt;br /&gt;
&lt;br /&gt;
====Objects====&lt;br /&gt;
&lt;br /&gt;
Objects are the basic units of storage in Amazon S3. Each object is composed of object data and metadata, and S3 supports objects of up to 5 terabytes. Metadata is a set of name-value pairs that describe the object, such as the date it was last modified; users can also store custom data about an object in its metadata. Every object is identified by a user-defined key and can be versioned. &amp;lt;ref&amp;gt;[http://docs.aws.amazon.com/AmazonS3/latest/dev/Introduction.html S3 Introduction]&amp;lt;/ref&amp;gt;. An object consists of the following - Key, Version ID, Value, Metadata, Subresources and Access Control Information. &amp;lt;ref&amp;gt;[http://docs.aws.amazon.com/AmazonS3/latest/dev/UsingObjects.html Using Objects]&amp;lt;/ref&amp;gt;&lt;br /&gt;
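&lt;br /&gt;
To make these parts concrete, here is a minimal sketch using the aws-sdk (Version 1) interface that also appears in the upload example below; it assumes credentials are already configured and that the bucket ''my-new-bucket'' already exists.&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
require 'aws-sdk'&lt;br /&gt;
&lt;br /&gt;
s3  = AWS::S3.new&lt;br /&gt;
obj = s3.buckets['my-new-bucket'].objects['hello.txt']&lt;br /&gt;
&lt;br /&gt;
obj.write('Hello World!')     # the object's value&lt;br /&gt;
puts obj.key                  # the user-defined key, 'hello.txt'&lt;br /&gt;
puts obj.read                 # reads the value back&lt;br /&gt;
puts obj.etag                 # system-maintained checksum of the value&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;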
&lt;br /&gt;
====Buckets====&lt;br /&gt;
&lt;br /&gt;
A bucket is a container for objects, and every object must belong to a bucket. Any number of objects can be stored in a bucket. Buckets can be configured to be hosted in a particular region (US, EU, Asia Pacific, etc.) in order to optimize latency. S3 limits the number of buckets per account to 100. &amp;lt;ref&amp;gt;[http://docs.aws.amazon.com/AmazonS3/latest/dev/UsingBucket.html Using Bucket]&amp;lt;/ref&amp;gt;&lt;br /&gt;
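&lt;br /&gt;
As a rough sketch with the same Version 1 interface (the bucket name is only an example), a bucket is created once and can then be looked up by name:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
require 'aws-sdk'&lt;br /&gt;
&lt;br /&gt;
s3 = AWS::S3.new&lt;br /&gt;
&lt;br /&gt;
# create a bucket (each one counts toward the 100-bucket account limit)&lt;br /&gt;
bucket = s3.buckets.create('my-new-bucket')&lt;br /&gt;
&lt;br /&gt;
# look the bucket up by name later and confirm it exists&lt;br /&gt;
puts s3.buckets['my-new-bucket'].exists?&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;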
&lt;br /&gt;
====Keys and Metadata====&lt;br /&gt;
&lt;br /&gt;
A user specifies a key on object creation, which uniquely identifies the object within its bucket. Keys can be at most 1024 bytes long.&lt;br /&gt;
&lt;br /&gt;
There are two kinds of metadata for an object - system metadata and object metadata. System metadata is used by S3 for object management; for example, the object creation date and Content-Type are stored as system metadata. Object metadata is optional and can be used by the user to attach additional information to an object at creation time. &amp;lt;ref&amp;gt;[http://docs.aws.amazon.com/AmazonS3/latest/dev/UsingMetadata.html Using Metadata]&amp;lt;/ref&amp;gt;&lt;br /&gt;
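&lt;br /&gt;
The sketch below (aws-sdk Version 1; the key ''report.txt'' and the ''department'' entry are made-up examples) contrasts system metadata with user-supplied object metadata:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
require 'aws-sdk'&lt;br /&gt;
&lt;br /&gt;
obj = AWS::S3.new.buckets['my-new-bucket'].objects['report.txt']&lt;br /&gt;
&lt;br /&gt;
# object metadata is supplied by the user at write time&lt;br /&gt;
obj.write('quarterly numbers', :metadata =&amp;gt; { 'department' =&amp;gt; 'sales' })&lt;br /&gt;
&lt;br /&gt;
# system metadata is maintained by S3 itself&lt;br /&gt;
puts obj.content_length&lt;br /&gt;
puts obj.last_modified&lt;br /&gt;
&lt;br /&gt;
# user-supplied metadata is read back from the metadata collection&lt;br /&gt;
puts obj.metadata['department']&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;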
&lt;br /&gt;
====Regions====&lt;br /&gt;
&lt;br /&gt;
Regions allow a user to specify the geographical region where buckets will be stored. This can be used to optimize latency and minimize costs.&lt;br /&gt;
S3 supports the following regions - US Standard, US West (Oregon) region, US West (N. California) region, EU (Ireland) region, EU (Frankfurt) region, Asia Pacific (Singapore) region, Asia Pacific (Tokyo) region, Asia Pacific (Sydney) region, South America (Sao Paulo) region &amp;lt;ref&amp;gt;[http://docs.aws.amazon.com/AmazonS3/latest/dev/Introduction.html#Regions Regions]&amp;lt;/ref&amp;gt;&lt;br /&gt;
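&lt;br /&gt;
For example, with the Version 2 client gem described in the Ruby and Amazon S3 section below, a bucket can be placed in the EU (Ireland) region roughly as follows; the region and bucket name are placeholders only:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
require 'aws-sdk'&lt;br /&gt;
&lt;br /&gt;
client = Aws::S3::Client.new(region: 'eu-west-1')&lt;br /&gt;
&lt;br /&gt;
# create the bucket in the EU (Ireland) region&lt;br /&gt;
client.create_bucket(&lt;br /&gt;
  bucket: 'my-eu-bucket',&lt;br /&gt;
  create_bucket_configuration: { location_constraint: 'eu-west-1' }&lt;br /&gt;
)&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;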
&lt;br /&gt;
====Versioning====&lt;br /&gt;
&lt;br /&gt;
S3 supports versioning, which can be used to retrieve and restore every version of an object in a bucket. When versioning is enabled, every change to an object (create, modify, delete) results in a separate version of the object, which can later be used for restoring or recovery. Versioning is configured at the bucket level and not for individual objects. It can be turned on per bucket, but a version-enabled bucket cannot be returned to an unversioned state; versioning can only be suspended. &amp;lt;ref&amp;gt;[http://docs.aws.amazon.com/AmazonS3/latest/dev/Versioning.html Versioning]&amp;lt;/ref&amp;gt;&lt;br /&gt;
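&lt;br /&gt;
A minimal sketch of enabling versioning on a bucket, assuming the aws-sdk Version 1 ''Bucket#enable_versioning'' helper and an existing bucket named ''my-new-bucket'' (the key ''notes.txt'' is just an example):&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
require 'aws-sdk'&lt;br /&gt;
&lt;br /&gt;
bucket = AWS::S3.new.buckets['my-new-bucket']&lt;br /&gt;
&lt;br /&gt;
bucket.enable_versioning           # versioning is configured per bucket&lt;br /&gt;
puts bucket.versioning_enabled?    # =&amp;gt; true&lt;br /&gt;
&lt;br /&gt;
# subsequent writes to the same key now create new versions&lt;br /&gt;
bucket.objects['notes.txt'].write('first draft')&lt;br /&gt;
bucket.objects['notes.txt'].write('second draft')&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;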
&lt;br /&gt;
====Access Permissions====&lt;br /&gt;
&lt;br /&gt;
All resources (buckets, objects, etc.) are private in Amazon S3 by default. Only the resource owner can access a resource, and the owner can grant other users access to it. There are two types of access policies in S3 - resource-based policies and user policies. Resource-based policies are attached to a particular resource, while user policies are assigned to a particular user.&amp;lt;ref&amp;gt;[http://docs.aws.amazon.com/AmazonS3/latest/dev/s3-access-control.html Access Control]&amp;lt;/ref&amp;gt;&lt;br /&gt;
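&lt;br /&gt;
As a sketch of resource-based permissions with aws-sdk Version 1, a canned ACL can be applied when an object is written (the bucket and object names are just examples):&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
require 'aws-sdk'&lt;br /&gt;
&lt;br /&gt;
obj = AWS::S3.new.buckets['my-new-bucket'].objects['hello.txt']&lt;br /&gt;
&lt;br /&gt;
# resources are private by default; grant everyone read access&lt;br /&gt;
# with the :public_read canned ACL&lt;br /&gt;
obj.write('Hello World!', :acl =&amp;gt; :public_read)&lt;br /&gt;
puts obj.public_url&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;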
&lt;br /&gt;
====Data Protection====&lt;br /&gt;
&lt;br /&gt;
Objects are redundantly stored on multiple devices across multiple facilities within a region for durability. To improve durability, write requests do not return success until the data has been stored across multiple facilities. Checksums are also used to verify data integrity; if any corruption is detected, it is repaired using redundant data.&amp;lt;ref&amp;gt;[http://docs.aws.amazon.com/AmazonS3/latest/dev/DataDurability.html Data Durability]&amp;lt;/ref&amp;gt;&lt;br /&gt;
&lt;br /&gt;
===Ruby and Amazon S3===&lt;br /&gt;
&lt;br /&gt;
Amazon Web Services (AWS) provides an SDK &amp;lt;ref&amp;gt;[http://rubygems.org/gems/aws-sdk download]&amp;lt;/ref&amp;gt; that works with Ruby for many Amazon web services, including Amazon S3. Developers new to the AWS SDK should begin with Version 2, as it includes many built-in features such as waiters, automatically paginated responses, and a streamlined plugin-style architecture. Version 2 of the SDK has two &amp;quot;packages&amp;quot;, also referred to as &amp;quot;gems&amp;quot; &amp;lt;ref&amp;gt;[http://en.wikipedia.org/wiki/RubyGems Wikipedia: RubyGems]&amp;lt;/ref&amp;gt;:&lt;br /&gt;
&lt;br /&gt;
:* '''aws-sdk-core''' - provides a direct mapping to the AWS APIs including automatic response paging, waiters, parameter validation, and Ruby type support&lt;br /&gt;
:* '''aws-sdk-resources''' - provides an object-oriented abstraction over the low-level interfaces in the core to reduce the complexity of using them; resource objects represent AWS entities such as an Amazon S3 bucket or object and expose their attributes and actions as instance variables and methods.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
It should also be noted that Version 1 of the AWS SDK still exists but lacks some &amp;quot;convenience features&amp;quot; that are available in Version 2. For more information, see the AWS Ruby Development Blog.&amp;lt;ref&amp;gt;[http://ruby.awsblog.com/post/Tx2OMCYFEZX2I6A/AWS-SDK-for-Ruby-V2-Preview-Release AWS Ruby Development Blog]&amp;lt;/ref&amp;gt;&lt;br /&gt;
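&lt;br /&gt;
The following sketch contrasts the two Version 2 gems; the region, bucket, key, and local file path are assumptions for illustration only:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
require 'aws-sdk'   # Version 2 pulls in both aws-sdk-core and aws-sdk-resources&lt;br /&gt;
&lt;br /&gt;
# aws-sdk-core: a direct, low-level mapping to the S3 API&lt;br /&gt;
client = Aws::S3::Client.new(region: 'us-east-1')&lt;br /&gt;
puts client.list_buckets.buckets.map(&amp;amp;:name)&lt;br /&gt;
&lt;br /&gt;
# aws-sdk-resources: object-oriented wrappers built on the same client&lt;br /&gt;
s3  = Aws::S3::Resource.new(client: client)&lt;br /&gt;
obj = s3.bucket('my-new-bucket').object('hello.txt')&lt;br /&gt;
obj.upload_file('/tmp/hello.txt')&lt;br /&gt;
puts obj.presigned_url(:get, expires_in: 3600)&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;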
&lt;br /&gt;
==Examples==&lt;br /&gt;
The following section contains examples of Ruby code interfacing with S3. Sources and documentation for the code are provided. Please observe the copyrights if you choose to use any or all of the posted code.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
'''Note:''' There are three key classes in the AWS SDK &amp;lt;ref&amp;gt;[http://docs.aws.amazon.com/AmazonS3/latest/dev/UsingTheMPRubyAPI.html Using the Ruby API]&amp;lt;/ref&amp;gt; (a short sketch tying them together follows this list) -&lt;br /&gt;
&lt;br /&gt;
:* '''AWS::S3''' - Denotes an interface to Amazon S3 for the Ruby SDK. It has the ''#buckets'' instance method for creating new buckets or accessing existing buckets.&lt;br /&gt;
:* '''AWS::S3::Bucket''' - Denotes an Amazon S3 Bucket. It provides the ''#objects'' instance method to access existing objects and also other methods to get information about a bucket.&lt;br /&gt;
:* '''AWS::S3::S3Object''' - Denotes an Amazon S3 object. It provides methods for getting information about an object, as well as for setting access permissions and for copying, deleting, and uploading objects.&lt;br /&gt;
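&lt;br /&gt;
A minimal sketch of how the three classes relate, assuming a bucket named ''my-new-bucket'' that already contains ''hello.txt'':&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
require 'aws-sdk'&lt;br /&gt;
&lt;br /&gt;
s3     = AWS::S3.new                      # AWS::S3&lt;br /&gt;
bucket = s3.buckets['my-new-bucket']      # AWS::S3::Bucket&lt;br /&gt;
object = bucket.objects['hello.txt']      # AWS::S3::S3Object&lt;br /&gt;
&lt;br /&gt;
puts object.read&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;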
&lt;br /&gt;
===Creating a connection to S3 server===&lt;br /&gt;
&lt;br /&gt;
Connecting to the S3 server is the essential starting point for accessing your data. The following example shows how to connect to the server via SSL &amp;lt;ref&amp;gt;Wikipedia: SSL http://en.wikipedia.org/wiki/SSL&amp;lt;/ref&amp;gt;.&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
AWS::S3::Base.establish_connection!(&lt;br /&gt;
        :server            =&amp;gt; 'objects.example.com',&lt;br /&gt;
        :use_ssl           =&amp;gt; true,&lt;br /&gt;
        :access_key_id     =&amp;gt; 'my-access-key',&lt;br /&gt;
        :secret_access_key =&amp;gt; 'my-secret-key'&lt;br /&gt;
)&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
:Source: http://ceph.com/docs/master/radosgw/s3/ruby/&lt;br /&gt;
&lt;br /&gt;
===Listing all buckets you own===&lt;br /&gt;
Buckets hold the objects you upload. To access any object, you must access its bucket first. This example shows how to query the S3 server for a list of all the buckets you own.&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
AWS::S3::Service.buckets.each do |bucket|&lt;br /&gt;
        puts &amp;quot;#{bucket.name}\t#{bucket.creation_date}&amp;quot;&lt;br /&gt;
end&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
'''Expected output:'''&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
mybuckat1   2011-04-21T18:05:39.000Z&lt;br /&gt;
mybuckat2   2011-04-21T18:05:48.000Z&lt;br /&gt;
mybuckat3   2011-04-21T18:07:18.000Z&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
:Source: http://ceph.com/docs/master/radosgw/s3/ruby/&lt;br /&gt;
&lt;br /&gt;
===Listing a bucket's contents===&lt;br /&gt;
For a known bucket (see example for listing all buckets you own), you can also query the server for all objects inside. This code shows you how.&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
new_bucket = AWS::S3::Bucket.find('my-new-bucket')&lt;br /&gt;
new_bucket.each do |object|&lt;br /&gt;
        puts &amp;quot;#{object.key}\t#{object.about['content-length']}\t#{object.about['last-modified']}&amp;quot;&lt;br /&gt;
end&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
'''Expected output'''&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
file1.filex 251262  2011-08-08T21:35:48.000Z&lt;br /&gt;
file2.filex 262518  2011-08-08T21:38:01.000Z&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
:Source: http://ceph.com/docs/master/radosgw/s3/ruby/&lt;br /&gt;
&lt;br /&gt;
===Deleting a bucket===&lt;br /&gt;
:'''Note:''' ''The target bucket must be empty!''&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
AWS::S3::Bucket.delete('my-new-bucket')&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
:Source: http://ceph.com/docs/master/radosgw/s3/ruby/&lt;br /&gt;
&lt;br /&gt;
===Forced removal of non-empty buckets===&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
AWS::S3::Bucket.delete('my-new-bucket', :force =&amp;gt; true)&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
:Source: http://ceph.com/docs/master/radosgw/s3/ruby/&lt;br /&gt;
&lt;br /&gt;
===Creating an object===&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
AWS::S3::S3Object.store(&lt;br /&gt;
        'hello.txt',&lt;br /&gt;
        'Hello World!',&lt;br /&gt;
        'my-new-bucket',&lt;br /&gt;
        :content_type =&amp;gt; 'text/plain'&lt;br /&gt;
)&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
:Source: http://ceph.com/docs/master/radosgw/s3/ruby/&lt;br /&gt;
&lt;br /&gt;
===Change an object's ACL (access control list)===&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
policy = AWS::S3::S3Object.acl('hello.txt', 'my-new-bucket')&lt;br /&gt;
policy.grants = [ AWS::S3::ACL::Grant.grant(:public_read) ]&lt;br /&gt;
AWS::S3::S3Object.acl('hello.txt', 'my-new-bucket', policy)&lt;br /&gt;
&lt;br /&gt;
policy = AWS::S3::S3Object.acl('secret_plans.txt', 'my-new-bucket')&lt;br /&gt;
policy.grants = []&lt;br /&gt;
AWS::S3::S3Object.acl('secret_plans.txt', 'my-new-bucket', policy)&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
:Source: http://ceph.com/docs/master/radosgw/s3/ruby/&lt;br /&gt;
&lt;br /&gt;
===Download an object to a folder===&lt;br /&gt;
:'''Note:''' ''This downloads the object poetry.pdf and saves it in /home/larry/documents/''&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
open('/home/larry/documents/poetry.pdf', 'w') do |file|&lt;br /&gt;
        AWS::S3::S3Object.stream('poetry.pdf', 'my-new-bucket') do |chunk|&lt;br /&gt;
                file.write(chunk)&lt;br /&gt;
        end&lt;br /&gt;
end&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
:Source: http://ceph.com/docs/master/radosgw/s3/ruby/&lt;br /&gt;
&lt;br /&gt;
===Deleting an object===&lt;br /&gt;
:'''Note:''' ''This deletes the object goodbye.txt''&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
AWS::S3::S3Object.delete('goodbye.txt', 'my-new-bucket')&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
:Source: http://ceph.com/docs/master/radosgw/s3/ruby/&lt;br /&gt;
&lt;br /&gt;
===Generating object download urls===&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
puts AWS::S3::S3Object.url_for(&lt;br /&gt;
        'hello.txt',&lt;br /&gt;
        'my-new-bucket',&lt;br /&gt;
        :authenticated =&amp;gt; false&lt;br /&gt;
)&lt;br /&gt;
&lt;br /&gt;
puts AWS::S3::S3Object.url_for(&lt;br /&gt;
        'secret_plans.txt',&lt;br /&gt;
        'my-new-bucket',&lt;br /&gt;
        :expires_in =&amp;gt; 60 * 60&lt;br /&gt;
)&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
'''Expected Output:'''&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
http://objects.dreamhost.com/my-new-bucket/hello.txt&lt;br /&gt;
http://objects.dreamhost.com/my-new-bucket/secret_plans.txt?Signature=XXXXXXXXXXXXXXXXXXXXXXXXXXX&amp;amp;Expires=1316027075&amp;amp;AWSAccessKeyId=XXXXXXXXXXXXXXXXXXX&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
:Source: http://ceph.com/docs/master/radosgw/s3/ruby/&lt;br /&gt;
&lt;br /&gt;
===Upload a file to Amazon S3===&lt;br /&gt;
:Under the Apache License v2.0, the following code may be reproduced and redistributed subject to the following &amp;lt;ref&amp;gt;[http://aws.amazon.com/apache-2-0/ license]&amp;lt;/ref&amp;gt;.&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
# Copyright 2011-2013 Amazon.com, Inc. or its affiliates. All Rights Reserved.&lt;br /&gt;
#&lt;br /&gt;
# Licensed under the Apache License, Version 2.0 (the &amp;quot;License&amp;quot;). You&lt;br /&gt;
# may not use this file except in compliance with the License. A copy of&lt;br /&gt;
# the License is located at&lt;br /&gt;
#&lt;br /&gt;
#     http://aws.amazon.com/apache2.0/&lt;br /&gt;
#&lt;br /&gt;
# or in the &amp;quot;license&amp;quot; file accompanying this file. This file is&lt;br /&gt;
# distributed on an &amp;quot;AS IS&amp;quot; BASIS, WITHOUT WARRANTIES OR CONDITIONS OF&lt;br /&gt;
# ANY KIND, either express or implied. See the License for the specific&lt;br /&gt;
# language governing permissions and limitations under the License.&lt;br /&gt;
&lt;br /&gt;
require 'aws-sdk'&lt;br /&gt;
&lt;br /&gt;
(bucket_name, file_name) = ARGV&lt;br /&gt;
unless bucket_name &amp;amp;&amp;amp; file_name&lt;br /&gt;
  puts &amp;quot;Usage: upload_file.rb &amp;lt;BUCKET_NAME&amp;gt; &amp;lt;FILE_NAME&amp;gt;&amp;quot;&lt;br /&gt;
  exit 1&lt;br /&gt;
end&lt;br /&gt;
&lt;br /&gt;
# get an instance of the S3 interface using the default configuration&lt;br /&gt;
s3 = AWS::S3.new&lt;br /&gt;
&lt;br /&gt;
# create a bucket&lt;br /&gt;
b = s3.buckets.create(bucket_name)&lt;br /&gt;
&lt;br /&gt;
# upload a file&lt;br /&gt;
basename = File.basename(file_name)&lt;br /&gt;
o = b.objects[basename]&lt;br /&gt;
o.write(:file =&amp;gt; file_name)&lt;br /&gt;
&lt;br /&gt;
puts &amp;quot;Uploaded #{file_name} to:&amp;quot;&lt;br /&gt;
puts o.public_url&lt;br /&gt;
&lt;br /&gt;
# generate a presigned URL&lt;br /&gt;
puts &amp;quot;\nUse this URL to download the file:&amp;quot;&lt;br /&gt;
puts o.url_for(:read)&lt;br /&gt;
&lt;br /&gt;
puts &amp;quot;(press any key to delete the object)&amp;quot;&lt;br /&gt;
$stdin.getc&lt;br /&gt;
&lt;br /&gt;
o.delete&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
See the AWS SDK for Ruby documentation for further details - &amp;lt;ref&amp;gt;[http://docs.aws.amazon.com/AWSRubySDK/latest/_index.html AWS SDK for Ruby]&amp;lt;/ref&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==References==&lt;br /&gt;
&amp;lt;references/&amp;gt;&lt;/div&gt;</summary>
		<author><name>Achen4</name></author>
	</entry>
</feed>