Do you have a host on your network using up all the bandwidth? Maybe an offsite backup device that keeps sending gigabytes of data offsite?
In my case, I deployed an AWS Storage Gateway in file gateway mode. This works perfectly for my use case, but there are a couple of gotchas with the AWS file gateway compared to a volume or tape gateway.
The main one is that you cannot schedule or rate limit the file gateway directly within the AWS Console; this simply is not built into the solution. To ensure the upload did not use all the bandwidth on the network, I used rate limiting on the Cisco ASA. Below is the configuration.
The first step is to create an ACL for the host (in this example the file gateway is 188.8.131.52). I am allowing all IP traffic to and from this host.
access-list AWS-FileGW-Throttle extended permit ip host 188.8.131.52 any
access-list AWS-FileGW-Throttle extended permit ip any host 188.8.131.52
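You can confirm both entries made it into the running configuration before moving on:

show running-config access-list AWS-FileGW-Throttle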
Next you need to create a class map to match the ACL that was just created.
class-map CM-AWSTHROTTLE
 match access-list AWS-FileGW-Throttle
exit
After that you need to configure a policy map, which the ASA will use to police the traffic. In this example I am rate limiting the host to 10 Mbps: the conform rate of 10000000 is in bits per second, and the second value, 1000000, is the burst size in bytes.
policy-map PM-AWSTHROTTLE
 class CM-AWSTHROTTLE
  police output 10000000 1000000
  police input 10000000 1000000
exit
Once these steps are complete, you need to apply the policy map as a service policy on the inside interface.
service-policy PM-AWSTHROTTLE interface inside
That's it! Now go check your network monitoring software or NetFlow data and you'll see it working!
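You can also verify directly on the ASA that the policer is matching traffic. The per-class police counters (packets conformed and exceeded, with their rates) are shown for each interface the policy is applied to:

show service-policy interface inside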