Wednesday, May 27, 2020

Backing Up Data In A Chrome Extension (Ears Audio Toolkit)

If you have a Chrome Extension with some state you'd like to back up for posterity, then this trick might work for you. In this case, I'm going to back up Ears Audio Toolkit, an amazing audio Equalizer that allows you to save equalizer presets for every audio device you have.

  • Right-click the extension
  • Choose inspect
  • In the left panel, select Application->Local Storage->chrome-extension://....
From here down, the directions will depend on the extension. For Ears Audio Toolkit:
  • Find the PRESETS key
  • Double-click the JSON representing the value of PRESETS
  • Copy/paste the giant JSON blob somewhere safe.
Ears Audio Toolkit offers a $1/month subscription to synchronize your data for you, but it doesn't really support capturing this data for your own personal backup, let alone versioning it in git. That said, I strongly encourage throwing $5 or more at the author to show your support, since this trick effectively bypasses the tool's business model. Also, since this is not really a supported feature, there's no guarantee the author won't change the format of the JSON blob and break this strategy.
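Since the blob is just JSON, the "version it in git" idea from above can be sketched in a few commands. Everything here is my choice, not part of the extension: the directory name, file name, and placeholder JSON; python3 and git are assumed to be on the PATH.

```shell
# PRESETS_JSON is a placeholder: paste the real blob copied from DevTools here.
PRESETS_JSON='{"example":"paste the real PRESETS value here"}'
mkdir -p ears-backup && cd ears-backup
printf '%s\n' "$PRESETS_JSON" > presets.json
python3 -m json.tool presets.json > /dev/null   # fails loudly if the paste was mangled
git init -q
git add presets.json
git -c user.name=backup -c user.email=backup@localhost commit -qm "Ears presets backup"
```

Each time you repeat the copy/paste, overwrite presets.json and commit again; git gives you the full history of your presets for free.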

Sunday, October 29, 2017

Cox Data Usage Charges

Recently, Cox, a city-sanctioned monopoly, has begun charging users for data usage over 1TB in my town, Santa Barbara.  While I don't believe that charging users for data usage is inherently wrong, I do believe that there is an actual fair market value for data and that Cox is overcharging dramatically - in an environment where it holds a government-sanctioned monopoly.  This, I believe, is wrong.

First, let's break down how I see pricing for data connections working into two groups: Fixed Costs and Variable Costs.

Fixed Costs

The primary fixed costs for home network connections are the physical connection between the internet provider and the home and the other networking equipment required to make the connection.  This is similar to the electrical lines that connect your home to the grid and the grid infrastructure necessary to transmit electricity from a generator to a home.  As with the power grid, there is not really an increase in cost here when a customer consumes more data, unless the customer has specialized requirements that require some sort of upgrade - which is extremely uncommon.

Variable Costs

Internet service providers have to connect their users to the rest of the Internet.  Fundamentally, a residential internet service provider has one or more agreements with other internet service providers to ensure that any computer on the internet can talk to any other computer on the internet.  Cox, for instance, likely has arrangements with companies such as Level3 and Cogent, who can connect Cox to other service providers such as Comcast and Time Warner.  Internet companies like Google and Facebook also have agreements with the same providers (Level3 and Cogent).  Unlike residential providers, these transit providers must compete for business, and the companies who connect to them can distribute traffic across several of them, balancing it throughout the day to bring costs down.  These costs continue to shrink every year.

A more detailed writeup of these fixed and variable costs can be found at this very good Broadband Now article.

So, what is wrong with what Cox is doing?

Cox's pricing is here.  I find this pricing suspicious given that Google Fiber is able to provide substantially better service at a lower cost in Kansas City.  But setting aside the fact that Cox charges more for less to all subscribers, let's look at what Cox charges people who use over 1TB of data to, say, restore their computers using an online backup service: $10/50GB, or roughly $0.20/GB.  These charges should reflect only changes in variable costs, not the fixed costs already covered by the over $70/month Cox charges customers after all fees.  Google, Microsoft, and Amazon charge between $0.087/GB and $0.12/GB - roughly half of what Cox charges - and these providers are including their fixed costs in those prices.  Using only 1GB of data in a month with Google/Amazon/Microsoft costs about a dime.  Using only 1GB of data in a month with Cox costs over $70.  Let's assume that this discrepancy is because Cox has higher fixed costs.  That means those fixed costs are covered, and any additional charges due to increased variable costs should be somewhere near market value for network transit fees.  Cox is charging over 10x those fees (more on this later).
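Running the numbers from the paragraph above (all dollar figures come from the text; the arithmetic is mine):

```shell
# All prices come from the paragraph above; this just does the division.
cox_per_gb=$(awk 'BEGIN { printf "%.2f", 10 / 50 }')           # $10 per 50GB block
vs_cloud_lo=$(awk 'BEGIN { printf "%.1f", (10/50) / 0.12 }')   # vs the $0.12/GB cloud high end
vs_cloud_hi=$(awk 'BEGIN { printf "%.1f", (10/50) / 0.087 }')  # vs the $0.087/GB cloud low end
echo "Cox overage: \$${cox_per_gb}/GB, or ${vs_cloud_lo}x to ${vs_cloud_hi}x cloud transfer pricing"
```

So even against cloud providers' retail prices, which bundle fixed costs, Cox's overage rate is roughly double.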

That's only a dime.  What does it matter?

If a home user with Cox's 1TB cap restores a 3 TB hard disk from a backup, Cox will charge that person an extra $400 (2TB over the cap, at $10 per 50GB block).  That's a lot of money just to restore a backup!  This pricing also acts as a mechanism to deter people from streaming video from Cox's competitors: YouTube, Netflix, etc.  Once you hit that cap, Cox will charge you $10 for every 16 hours of Netflix you watch in 4k.  Netflix charges $12/month for the ability to stream as much 4k video as you want, and that includes both the cost of licensing the movies and paying their internet provider (Cogent or Level3) to ship the movies over the network.  That's just a couple of movies a week before you're paying more money to Cox than you are to Netflix!
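To put the Netflix comparison in numbers, using only the figures in the paragraph above (the weeks-per-month constant is my approximation):

```shell
# "16 hours of 4k per 50GB block" and "$12/month Netflix" come from the text above.
gb_per_hour=$(awk 'BEGIN { printf "%.1f", 50 / 16 }')                     # implied 4k rate
hours_for_12=$(awk 'BEGIN { printf "%.1f", 12 / 10 * 16 }')               # 4k hours whose overage costs $12
movies_per_week=$(awk 'BEGIN { printf "%.1f", (12/10*16) / 2 / 4.33 }')   # 2-hour movies, ~4.33 weeks/month
echo "Past the cap, about ${movies_per_week} movies a week in overage fees matches the Netflix bill"
```

In other words, a bit over two 2-hour movies a week past the cap and the overage fees alone equal the Netflix subscription.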

How much should Cox charge then?

Recall what Amazon, Microsoft, and Google charge: between $0.087/GB and $0.12/GB, with their fixed costs included.

These providers, including Cox, were likely paying less than $0.01/GB as far back as 2011, in a market where prices have been falling consistently for decades.  Without transparency from Cox, it's hard to say what their bandwidth acquisition costs are, and Cox should be allowed to turn a profit.  But I consider anything more than $0.03/GB (billed in 1GB blocks, not 50GB blocks) highly suspicious - even that reflects a 3x margin over the transit prices I have been able to find.  Cox is currently charging between $0.20 and $10.00/GB, depending on how much of a 50GB block is used.
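The $0.20 to $10.00/GB spread follows directly from block billing, since a 50GB block costs $10 whether you use all of it or just 1GB. Checking that, plus the proposed cap's markup (prices from the text, arithmetic mine):

```shell
# Effective per-GB price depends on how much of the $10, 50GB block gets used.
full_block=$(awk 'BEGIN { printf "%.2f", 10 / 50 }')    # all 50GB of the block used
one_gb=$(awk 'BEGIN { printf "%.2f", 10 / 1 }')         # only 1GB of the block used
markup=$(awk 'BEGIN { printf "%.0f", 0.03 / 0.01 }')    # proposed $0.03 cap vs ~$0.01/GB transit
echo "Effective price: \$${full_block}/GB to \$${one_gb}/GB (proposed \$0.03 cap is ${markup}x transit)"
```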

What should be done?

I would like the Santa Barbara city council to publicly work with Cox to correct this abuse of its monopoly privileges in our community.  My personal feedback is that the city should ensure that policy changes like this one re-open negotiations between the monopoly and the city, so that the tax-paying consumer is treated fairly.  I would hope other jurisdictions do the same.

Friday, September 15, 2017

Git: Resetting a remote branch to a specific hash without a force push

I had a series of commits (including merges) on a branch that I wanted to roll back quickly.  I wasn't able to find any help for this problem that didn't involve either giving git a bunch of help navigating trees with the git revert -m command, or using reset and a force push.  Here's a trick that's very similar to the reset strategy but retains all of the history:

> git reset --hard THE_HASH_YOU_WANT_TO_RETURN_TO
# That's our good commit
> git rebase -i origin/master
# During the rebase, I squashed all but the top commit to make it one giant commit.
# Gives us a single commit with all of the things that changed since the good commit.
# That commit was HASH_OF_ALL_CHANGES_SINCE_GOOD_COMMIT
> git revert HASH_OF_ALL_CHANGES_SINCE_GOOD_COMMIT
# That makes a negative commit of that one giant commit, named REVERT_OF_ALL_CHANGES_SINCE_GOOD_COMMIT
> git reset --hard origin/master
# Back to reality
> git cherry-pick REVERT_OF_ALL_CHANGES_SINCE_GOOD_COMMIT
# Applies a change that reverts all changes since THE_HASH_YOU_WANT_TO_RETURN_TO
After that, just push!
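To see that the recipe really does preserve history, here's a throwaway demo of the same end state. The repo, file names, and commit messages are all made up, and since this toy history has no merges, a plain range revert stands in for the interactive squash-and-revert dance:

```shell
set -e
workdir=$(mktemp -d) && cd "$workdir"
git init -q repo && cd repo
git config user.email demo@example.com && git config user.name demo
echo one > file.txt && git add file.txt
git commit -qm "good"                             # plays the role of THE_HASH_YOU_WANT_TO_RETURN_TO
GOOD=$(git rev-parse HEAD)
echo two > file.txt && git commit -qam "bad1"
echo three > file.txt && git commit -qam "bad2"   # plays the role of origin/master's tip
git revert --no-commit "$GOOD"..HEAD              # stage the inverse of everything since "good"
git commit -qm "revert back to good"              # one ordinary forward commit, so no force push
```

Afterward file.txt is back to its "good" contents, and all four commits (good, bad1, bad2, revert) remain in history.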

Saturday, December 31, 2016

Portable Minimal Emacs Clone For Linux

I have the benefit of working in an organization that provides quite a bit of autonomy, but even still, the life of a programmer inside of a Fortune 500 company means I'm going to be editing text files on machines for which I can't install software.  Sometimes, that means my editor choices are limited to nano and vim, and I prefer emacs.  Typically, I just want an editor that responds to my muscle memory without thinking too hard.  So, a simple editor that feels like emacs (maybe with a dired mode and a shell) is good enough.  Bonus: it's best if I can just wget the editor and use it.  Enter QEmacs!
  1. Download the latest version from the website.
  2. Extract it somewhere.
  3. Run ./configure --disable-x11 --disable-xv --disable-html --disable-png
  4. Run make.
  5. Now you'll have a binary called qe.  Read the docs here.
If you have any trouble building, here's my 272k binary.  Let me know if it works for you.

Saturday, December 17, 2016

Deploying Scala Jar Files into AWS Lambda

This blog post is a text-representation of a lightning talk I gave at a monthly meeting at work. A video recording is available here.

AWS Lambda is a product that allows you to upload code and configure a "trigger" for it; the code then runs on Amazon's infrastructure, and you are billed in 100ms increments for the compute resources.  Read more about it here.  In this article, let's take a look at how we can put some Scala code into AWS.

Set Up Environment

  • Log into the AWS Management Console and navigate to IAM.  
  • Click Users->Find your username, and click on it (or create one)
  • Click Security Credentials
  • Click Create Access Key
  • Note the Access Key and Secret Access Key, you'll need these.
While you're here, let's make an execution role
  • Click Roles
  • Click Create New Roles
  • I'm going to use the name "lambda_basic_execution"
  • Choose "select" next to AWS Lambda
  • Create a policy
  • Find and select AWSLambdaBasicExecutionRole
Ok, that's enough AWS stuff.  Let's get your laptop environment going...

Install the AWS Command Line Interface.  On a mac with brew, I just typed "brew install awscli".

Now configure the aws command line interface:

$ aws configure
AWS Access Key ID [****************PEPQ]:
AWS Secret Access Key [****************+iVg]:
Default region name [us-west-1]:
Default output format [json]:

The Code

Next, we need to create a project.  Create a normal maven project, but be sure to include the following two additions.  A plugin:
<plugin>
  <groupId>org.apache.maven.plugins</groupId>
  <artifactId>maven-shade-plugin</artifactId>
  <version>2.3</version>
  <configuration>
    <createDependencyReducedPom>false</createDependencyReducedPom>
  </configuration>
  <executions>
    <execution>
      <phase>package</phase>
      <goals>
        <goal>shade</goal>
      </goals>
    </execution>
  </executions>
</plugin>
And a dependency:
<dependency>
  <groupId>com.amazonaws</groupId>
  <artifactId>aws-lambda-java-core</artifactId>
  <version>1.0.0</version>
</dependency>

There is a poorly documented requirement that you return a specially formatted Java Map representing the HTTP response.  This is necessary because not all AWS Lambda functions speak HTTP, but this one does.  The Map will be serialized by Amazon using Jackson, so we need to return Java POJOs and Java collections.  Since we're writing Scala, things get a little ugly.  Here's a little helper package object to make that easier:
package com.cj.lambda

import scala.collection.JavaConversions.mapAsJavaMap

package object gatewayResponse {
  def response(response: ResponseCode, body: String, headers: Map[String, String] = Map()): java.util.Map[String, Object] = {
    mapAsJavaMap(Map(
      "statusCode" -> response.boxed(),
      "headers"    -> mapAsJavaMap(headers).asInstanceOf[java.util.Map[java.lang.String, java.lang.String]],
      "body"       -> body)
    ).asInstanceOf[java.util.Map[java.lang.String, java.lang.Object]]
  }

  case class ResponseCode(value: Int) {
    def boxed(): java.lang.Integer = value  // box the Int so Jackson can serialize it
  }

  val OK = ResponseCode(200)
}

And, finally, our lambda function:
package com.cj.demo

import
import com.cj.lambda.gatewayResponse._

class Demo {
  def myHandler(context: Context) = {
    response(OK, "David Says Hello")
  }
}


To build the code, just run:
mvn package

And to deploy the lambda to AWS the first time, just run the following:
$ aws lambda create-function \
>     --function-name scala-lambda-demo \
>     --runtime java8 \
>     --role arn:aws:iam::727586729164:role/lambda_basic_execution \
>     --handler com.cj.demo.Demo::myHandler \
>     --zip-file fileb://target/scala-lambda-demo-1.0.jar


One last one-time step we need to follow is to create an HTTP endpoint and link it to AWS.  Do the following:

  • Navigate once again to the AWS Console.
  • Navigate to the API Gateway
  • Click "Create API"
  • Give the API a name and click "Create API"
  • Create a resource, and then inside that resource create a method of type "get".
  • Integration type=lambda, use lambda proxy integration, make the region match your previously chosen region, and the lambda function you deployed should auto-complete when you start typing its name.  If it doesn't auto-complete, it means that you've either chosen the wrong region or that something went wrong when you used the aws command to create the function.

Now, click Actions->Deploy API and deploy to a stage.  On the next screen, you will be given your HTTP endpoint (the "Invoke URL").

Just take that "Invoke URL" and append the resource name to the end, and it should take you to the results of running your Scala code.  If you see {"message":"Missing Authentication Token"}, don't forget to add the name of your resource to the URL!

That's it.  You've created an HTTP endpoint and linked it to a "hello world" jar, written in Scala and deployed in AWS Lambda.  If you want to add features to your application, just repeat the mvn package step and then use the following command to update the lambda function:
aws lambda update-function-code \
    --function-name scala-lambda-demo \
    --zip-file fileb://target/scala-lambda-demo-1.0.jar

Happy Coding!

Friday, November 11, 2016

Deploying Node JS Apps to Amazon Lambda In 5 Minutes:

AWS Lambda is a bit of a game-changer for developers who don't have the means or the desire to mess with devops, even in the simplified world of containers.  We just write some code and put it somewhere.  When a request comes in, resources are automatically allocated to run that code, the code is executed, and then the resources are deallocated.  In Lambda, we are only billed when our software actually runs, so hosting a website with very little traffic can be almost free.  Unfortunately, these things are not easy to set up and configure using the tools Amazon provides.  But an open source utility called serverless makes it really easy!

Here's how:
  • Signup / Log into Amazon AWS
  • Create an IAM AWS Access Key ID and AWS Secret Access Key here. You'll need these in a second.
  • npm install -g serverless
Run the following on the command line:
$ export AWS_ACCESS_KEY_ID=YOUR_KEY_ID_HERE
$ export AWS_SECRET_ACCESS_KEY=YOUR_ACCESS_KEY_HERE
$ mkdir test && cd test
$ serverless create --template aws-nodejs

Edit the serverless.yml file and uncomment the following lines:
    events:
      - http:
          path: users/create
          method: get

And, run:
serverless deploy
After a few seconds, your server's URL should be printed.  Just edit handler.js to add functionality to your application.

Tuesday, October 25, 2016

Upgrading a Netgear Orbi - Solution to "This firmware file is incorrect!"

I just got a brand new Netgear Orbi router with extender (called a satellite) this weekend.  When navigating to the firmware upgrade page, the UI indicated that I had an upgrade available for the base station, but the satellite unit was stuck in "please wait" mode.  So, I decided to try upgrading the firmware manually.  I read online that you should always upgrade the satellite first, so I downloaded the firmware for both, logged into my router, and attempted to upgrade.  I was then presented with the following error message when attempting to load the firmware for the RBS50 (the satellite):

The firmware file is incorrect!  Please get the firmware file again and make sure it is correct firmware for this product.
After a little poking around, I realized that my satellite appears in the list of "attached devices" and when I connect directly to the satellite, there is a different interface for upgrading the firmware of that device.  So, I used that interface to upgrade just the satellite (the RBS50).  Once finished, I could upgrade the firmware of the Orbi router using the automatic upgrade feature.