AWS Storage

Use cases: Rapid Changing Data | Immediate Access | File System | Structured Data with Query | Archival Data | Dynamic Web | Highly Durable | Static Web | Temporary | Persistent | Relational Database | Shared | Snapshot

EBS (Elastic Block Store): Yes Yes Yes Yes Yes
RDS: Yes Yes
DynamoDB: Yes Yes
EC2: Yes Yes Yes Yes Yes
S3: Yes Yes Yes Yes Yes
EFS: Yes Yes
CloudSearch: Yes
Glacier: Yes

Amazon EFS

What Is Amazon Elastic File System?

Amazon Elastic File System (Amazon EFS) provides simple, scalable file storage for use with Amazon EC2. With Amazon EFS, storage capacity is elastic, growing and shrinking automatically as you add and remove files, so your applications have the storage they need, when they need it.

Amazon EFS has a simple web services interface that allows you to create and configure file systems quickly and easily. The service manages all the file storage infrastructure for you, avoiding the complexity of deploying, patching, and maintaining complex file system deployments.
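
For example (as a rough sketch; the creation token, file system ID, subnet, and security group below are placeholders), a file system and a mount target can be created with the AWS CLI:

# Create an EFS file system; the creation token is an arbitrary idempotency string.
aws efs create-file-system --creation-token my-efs-demo

# Expose the file system in a subnet so EC2 instances there can mount it.
aws efs create-mount-target --file-system-id fs-12345678 --subnet-id subnet-aaaabbbb --security-groups sg-11112222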

Amazon EFS supports the Network File System version 4.1 (NFSv4.1) protocol, so the applications and tools that you use today work seamlessly with Amazon EFS. Multiple Amazon EC2 instances can access an Amazon EFS file system at the same time, providing a common data source for workloads and applications running on more than one instance or server.
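
As a minimal sketch of mounting it from an EC2 Linux instance (the file system ID, region, and mount point are placeholders; the options are the commonly recommended NFS settings):

# Create a mount point and mount the file system over NFSv4.1.
sudo mkdir -p /mnt/efs
sudo mount -t nfs4 -o nfsvers=4.1,rsize=1048576,wsize=1048576,hard,timeo=600,retrans=2 fs-12345678.efs.us-east-1.amazonaws.com:/ /mnt/efs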

Amazon EFS Pricing

With Amazon EFS, you pay only for the amount of file system storage you use per month. There is no minimum fee and there are no set-up charges. There are no charges for bandwidth or requests.

Pricing

US East (N. Virginia) $0.30/GB-month
US East (Ohio) $0.30/GB-month
US West (Oregon) $0.30/GB-month
EU (Ireland) $0.33/GB-month
Asia Pacific (Sydney) $0.36/GB-month

For example, assume your file system is located in the US East (N. Virginia) region, uses 100GB of storage for 15 days in March, and 250GB of storage for the final 16 days in March.

At the end of March, you would have the following usage in GB-Hours:

Total usage (GB-hours) = [100 GB x 15 days x (24 hours / day)] + [250 GB x 16 days x (24 hours / day)]

= 132,000 GB-Hours

We add up the GB-Hours and convert to GB-Months (March has 31 days x 24 hours = 744 hours) to calculate the monthly charge:

Total usage (GB-Months) = 132,000 GB-Hours / 744 hours = 177 GB-Months (rounded)

Total Monthly Storage Charge = 177 GB-Months x $0.30 = $53.10
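
The same arithmetic as a quick shell (awk) sketch, rounding GB-Months down to a whole number as in the example above:

awk 'BEGIN {
    gb_hours  = 100*15*24 + 250*16*24      # 36,000 + 96,000 = 132,000 GB-hours
    gb_months = int(gb_hours / (31*24))    # 132,000 / 744 hours in March = 177 GB-months
    printf "GB-hours: %d, GB-months: %d, charge: $%.2f\n", gb_hours, gb_months, gb_months * 0.30
}'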

 

 

Windows Server 2012 and 2016 do not include an NFSv4 client. They do include an NFSv4.1 server, but the built-in client supports only NFS v2 and v3. Mounting an EFS file system requires an NFSv4-capable client, so EFS file systems cannot be accessed from Windows Server 2012 or 2016.

We understand the importance of Windows support, and we are taking your feedback into account along with customer requests for other features.

KVM vs Xen vs VirtualBox

Ubuntu 15.10: KVM vs. Xen vs. VirtualBox Virtualization Performance
Written by Michael Larabel in Software on 22 October 2015.

Our latest benchmarks of Ubuntu 15.10 are looking at the performance of this latest Linux distribution release when comparing the performance of guests using KVM, Xen, and VirtualBox virtualization from the same system.

The tests were all done on an Intel Xeon E5-2687W v3 + MSI X99S SLI PLUS system with 16GB of DDR4 system memory, an 80GB Intel M.2 SSD, and AMD FirePro V7900 graphics. After running our disk- and processor-focused benchmarks on this Ubuntu 15.10 host system, the "bare metal" results were compared to: a KVM guest set up via virt-manager using the Ubuntu Wily packages; the same Ubuntu 15.10 guest under Xen, using the Xen 4.5 packages present in Ubuntu 15.10 and again managed with virt-manager; and finally the Ubuntu 15.10 guest under VirtualBox 5.0.4 as available from the package archive.

The host system and all guests were using Ubuntu 15.10 64-bit with the Linux 4.2.0-16-generic kernel, Unity desktop, X.Org Server 1.17.2, an EXT4 file-system, and GCC 5.2.1 as the code compiler. All system settings remained the same during the testing process.

When creating the KVM/Xen/VirtualBox guests, 38GB of virtual storage on the Intel SSD, 8GB of RAM, and all 20 CPU threads (ten-core CPU + HT) were made available to the virtual machine under test. As each platform has near endless tunable possibilities, the testing was done with the default settings. If there's enough interest and premium support from Phoronix readers, I can look at putting out some "tuned" VM comparisons.
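
As a rough illustration of that guest configuration (not the article's exact commands; the guest name and ISO path are placeholders), a comparable KVM guest could be defined with virt-install:

# Define a KVM guest with 8GB RAM, 20 vCPUs, and a 38GB virtual disk.
virt-install --name ubuntu1510-guest --ram 8192 --vcpus 20 --disk size=38 --cdrom /var/lib/libvirt/images/ubuntu-15.10-desktop-amd64.iso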

All of the benchmarks for this article were carried out via the open-source Phoronix Test Suite benchmarking software.
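
For reference (the test profile names here are the standard Phoronix Test Suite profiles for the disk tests discussed below, not necessarily the article's exact selection), the benchmarks can be run on the host and inside each guest like so:

# Install and run the SQLite and Dbench disk benchmarks via the Phoronix Test Suite.
phoronix-test-suite benchmark pts/sqlite pts/dbench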

First up are some of the disk benchmarks for the host system and then the KVM/Xen/VirtualBox guests.

With VirtualBox's reported result outperforming the bare metal system, it looks like VirtualBox in its default configuration isn't fully honoring fsyncs to the disk for this SQLite database benchmark; the performance is simply too good compared to KVM and Xen. KVM was the slowest here, so if you're doing a lot of write-intensive work in your VMs, you are better off letting the VM access a raw partition rather than just setting up a virtual disk.
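
A minimal sketch of that suggestion (the guest name and partition are placeholders, and the partition must not be mounted or otherwise in use by the host):

# Attach the host partition /dev/sdb1 to the guest as /dev/vdb, persisting the change in the guest definition.
virsh attach-disk ubuntu1510-guest /dev/sdb1 vdb --persistent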

With the random and sequential writes, VirtualBox was reported to be faster than KVM and Xen, but again this may be a matter of its different behavior around committing writes to disk.

When it comes to sequential reads, VirtualBox was slower than the competition.

Dbench is another disk benchmark showing VirtualBox's different behavior: it apparently does not write everything out to disk during testing, and the reported performance is again too good.

Exporting And Importing Munin Graph Data

Sunday, July 21, 2013 – 20:38

When Munin does a data update it stores all of the data from the nodes as a set of rrd files. These files are then picked up by the munin-graph and munin-html programs and turned into the graph images and web pages that you are probably familiar with if you use Munin.

The default location for Munin to store these data files is within the directory /var/lib/munin. Each group you define in your config is given its own subdirectory, and the rrd data files for all servers within each group are kept within that directory. If you kept the default Munin config file you will probably have a directory called localhost which will contain all of the rrd files for your Munin server.

If you want to create a backup solution for Munin then you only need to back up these directories, and perhaps the munin.conf file in /etc/munin. If you have any problems you can just restore these directories back to where they were and Munin will pick up the data automatically when it runs the next update. You don't need to keep hold of the HTML pages as these are completely rewritten when an update is run.
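
As a minimal backup sketch (paths assume a default Debian/Ubuntu-style layout with the configuration under /etc/munin):

# Archive the RRD data directories and the main Munin configuration file.
tar czf munin-backup-$(date +%F).tar.gz /var/lib/munin /etc/munin/munin.conf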

This is fine when restoring the data files back to the same machine, but you might find you have problems if you try to move these data files from one server to another. When I did this, Munin generated the following error in its munin-update.log file.

Munin: This RRD was created on another architecture

The problem was that I had moved these files from a 32-bit system to a 64-bit system, which made the files unreadable on the new machine because of the different architecture. The rrd files used by Munin are in a binary format, and it is not possible to simply alter them with an editor. The solution is to use a tool called rrdtool to convert the rrd files into XML files on the old server and then import them on the new server. rrdtool is used to interact with rrd files in various ways; in this case we use its dump command to convert the binary files into the more readable XML format.

The following commands convert all of the rrd files within the localdomain folder into XML format. For completeness I have also included the creation of the destination directory.

# Dump every binary .rrd file in the localdomain group to XML in the home directory.
cd /var/lib/munin/localdomain
mkdir -p ~/munin/localdomain
for i in ./*.rrd; do rrdtool dump "$i" ~/munin/localdomain/"$i".xml; done

You will need to do the same thing for each of the groups you have in your Munin setup.

Once you have converted these files you can back them up or simply move them to the destination server. The following commands take a directory of XML files (as created by rrdtool) and convert them back into binary rrd files, this time using the rrdtool restore command. Again, for completeness I have included commands to delete the current Munin rrd directory (otherwise the tool will error) and to set the correct ownership and permissions on the directory afterwards.

# Remove the old rrd directory and recreate it empty (rrdtool restore errors if the target file already exists).
rm -rf /var/lib/munin/localdomain
mkdir /var/lib/munin/localdomain

# Restore each XML dump back into a binary .rrd file with the same name.
cd /home/kubuntu/munin/localdomain
for i in ./*.xml; do rrdtool restore "$i" "/var/lib/munin/localdomain/${i%.xml}"; done

# Reset permissions and ownership so the munin user can read and write the data.
chmod -R 766 /var/lib/munin
chown -R munin:munin /var/lib/munin

With these two scripts you should be able to back up any data you have in your Munin install, or migrate your install to any architecture.

WAF ADC Storage

http://searchnetworking.techtarget.com/tip/Storage-virtualization-benefits-and-challenges-A-primer

Click to access dcf-dcx-vmware-refarch-ga-ab-071-01.pdf

http://searchstorage.techtarget.com/report/Common-places-for-data-storage-bottlenecks-in-your-IT-infrastructure

http://searchnetworking.techtarget.com/tip/Application-delivery-controllers-Moving-toward-the-application-centric-network

http://searchnetworking.techtarget.com/tip/Can-application-delivery-controllers-support-virtualization

http://searchsdn.techtarget.com/tip/SDN-load-balancer-debate-Controller-or-ADC

http://searchsdn.techtarget.com/essentialguide/Understanding-the-basics-of-bare-metal-switches

http://whatis.techtarget.com/definition/hyperscale-computing

http://searchnetworking.techtarget.com/news/4500273610/Citrix-F5-tackle-wireless-data-traffic-with-mobile-ADC

http://searchnetworking.techtarget.com/resources/Application-Acceleration-and-Server-Load-Balancing

 

 

Latency Testing

http://apmblog.dynatrace.com/2014/06/10/understanding-application-performance-on-the-network-part-i-a-foundation-for-network-triage/

http://apmblog.dynatrace.com/2014/06/19/understanding-application-performance-on-the-network-bandwidth-and-congestion/

http://apmblog.dynatrace.com/2014/06/26/understanding-application-performance-on-the-network-tcp-slow-start/

http://apmblog.dynatrace.com/2014/07/03/understanding-application-performance-on-the-network-packet-loss/

http://apmblog.dynatrace.com/2014/07/10/understanding-application-performance-on-the-network-processing-delays/

http://apmblog.dynatrace.com/2014/07/24/understanding-application-performance-on-the-network-the-nagle-algorithm/

http://apmblog.dynatrace.com/2014/08/12/understanding-application-performance-network-part-tcp-window-size/

 

Comparison of HP 3PAR Online Import and Dell/EMC SANCopy

In the market, each storage vendor has its own technology features for data migration, for example Dell/EMC VPLEX encapsulation, MirrorView/S/A, SANCopy, HP 3PAR Online Import, and 3PAR Peer Motion. Today we will discuss the differences between Dell/EMC SANCopy and HP 3PAR Online Import, and list the advantages and disadvantages of each. The following diagrams show the detailed architecture for data migration with EMC SANCopy and HPE 3PAR Online Import.

 

The architecture diagram for migration by EMC SANCopy:

  • Source Array – HP 3PAR StoreServ 7200 (OS 3.2.2)
  • Target Array – EMC VNX5200 (VNX OE 33)
  • SAN Switch – 2 x Brocade DS-300B
  • Migration Host – Microsoft Windows 2008 R2
  • Migration Method – EMC SANCopy (Push Mode)

 

FSM_Diagram.png

 

Execute the data migration with the SAN Copy Create Session Wizard in EMC Unisphere.

 

untitled.png

 

The architecture diagram for migration by HP 3PAR Online Import:

 

  • Source Array – EMC VNX5200 (VNX OE 33)
  • Target Array – HPE 3PAR StoreServ 7200 (OS 3.2.2)
  • Migration Host – Microsoft Windows 2008 R2
  • Migration Management Host – HP 3PAR Online Import Utility 1.5 & EMC SMI-S Provider 4.6.2
  • SAN Switch – 2 x Brocade DS-300B
  • Migration Method – HP 3PAR Online Import

FSM_Diagram2.png

 

Execute the data migration with HP 3PAR Online Import Utility CLI commands:

addsource -type CX -mgmtip x.x.x.x -user <admin> -password <password> -uid <Source Array's WWN>
adddestination -mgmtip x.x.x.x -user <admin> -password <password>
createmigration -sourceuid <Source Array's WWN> -srchost <Source host> -destcpg <Target CPG> -destprov thin -migtype MDM -persona "WINDOWS_2008_R2"

 

10-.png

 

The table below compares EMC SANCopy and HP 3PAR Online Import:

 

table.png

 

The following are the pros and cons of each migration method.

 

EMC SANCopy

Pros:

  • Each source LUN can be migrated to the target array one by one.
  • Any FC port on each storage controller can be configured as a SANCopy port, and SANCopy ports and host ports can run at the same time.
  • All migration operations can be executed from EMC Unisphere (the VNX management server); no additional migration server installation is required.
  • The SANCopy license is bundled with VNX storage.

Cons:

  • SANCopy does not support incremental mode if the source array is a third-party model.

 

HP 3PAR Online Import

Pros:

  • The destination HP 3PAR StoreServ storage system must have a valid HP 3PAR Online Import or HP 3PAR Peer Motion license installed; by default, a 180-day temporary Peer Motion license is included.

Cons:

  • A migration definition cannot migrate source LUNs to the target array one by one; i.e. if the EMC storage group contains three LUNs, all of them will be migrated when the migration session starts.
  • All migration definitions can only be executed from the 3PAR Online Import Utility, which runs on a separate management host used for the data migration.

 

https://community.emc.com/message/964379#964379