tag:blogger.com,1999:blog-26439232221333202972024-03-05T01:58:21.925-08:00David RonDavid Ronhttp://www.blogger.com/profile/03498490798803568055noreply@blogger.comBlogger214125tag:blogger.com,1999:blog-2643923222133320297.post-32445394878072705622022-04-22T11:49:00.001-07:002022-04-22T11:53:50.749-07:00Moving To a Correct Known Location in a Bash Script<p>Sometimes you need to make sure that a bash script is run from a specific location. One way to do that is to keep doing a cd .. until reaching a known-good spot in the tree. This Bash code iteratively moves up one folder at a time until it reaches a folder containing a specific subfolder, then stops.
<code>
while [[ $PWD != / ]] ; do
  [ -d "$PWD/invoca_ctf_2022" ] && echo "$PWD" && break
  cd ..
done
# If we walked all the way up to / without finding the folder, fail.
if [[ $PWD == / ]] ; then exit 1; fi
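# The loop above can also be wrapped in a reusable function (a sketch; the
# marker directory name is whatever you want to search for). It walks up
# until a directory containing the marker is found, or fails on reaching /.
ascend_to() {
  local marker="$1"
  while [[ $PWD != / ]]; do
    [ -d "$PWD/$marker" ] && return 0
    cd ..
  done
  return 1
}
# Usage: ascend_to invoca_ctf_2022 || exit 1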
</code>David Ronhttp://www.blogger.com/profile/03498490798803568055noreply@blogger.com0tag:blogger.com,1999:blog-2643923222133320297.post-20769885047927227062020-07-10T19:32:00.003-07:002020-07-10T20:02:38.979-07:00Java and Gradle Continuous Integration Builds Using Github Actions<div class="separator" style="clear: both; text-align: center;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhSA5H8hgNQJaLecEfkV0nsnv_iPkHNcsgzZrh-qAQWFpjxpKePclQmJrcgLZuiiav9zl8xcK3xRvEfccNvacASohRKfQcC1R5xoAq7NFXVFn-M1eQtI6TGQdecFmX4pU9J-zHlaQosSYY/s428/Screen+Shot+2019-09-19+at+12.45.21+PM.png" style="margin-left: 1em; margin-right: 1em;"><img border="0" data-original-height="428" data-original-width="376" height="256" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhSA5H8hgNQJaLecEfkV0nsnv_iPkHNcsgzZrh-qAQWFpjxpKePclQmJrcgLZuiiav9zl8xcK3xRvEfccNvacASohRKfQcC1R5xoAq7NFXVFn-M1eQtI6TGQdecFmX4pU9J-zHlaQosSYY/w225-h256/Screen+Shot+2019-09-19+at+12.45.21+PM.png" width="225" /></a></div><div><br /></div>There are a million CI solutions available to engineers these days, but one of
the simplest to <i>integrate</i> with a simple GitHub project is the one built right
into GitHub: <a href="https://github.com/features/actions">Actions</a>. Here's a
quick process for setting up a Java project with Gradle to run your tests on
every commit for every branch automatically.
<div><br /></div>
<div>
Just drop into your project a file named
<span class="code">.github/workflows/continuous-integration-workflow.yml</span>
with the following contents:
</div>
<pre class="code">name: Build
on: [push]
jobs:
  build:
    name: "David's Build"
    # This job runs on Linux
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v1
      - name: 'Set up JDK 1.8'
        uses: actions/setup-java@v1
        with:
          java-version: 1.8
      - name: 'gradlew build'
        run: cd ${GITHUB_WORKSPACE} && ./gradlew build
</pre>
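If you prefer the command line, the whole file can be dropped in place from a shell (run from the repository root; same contents as the workflow above):

```shell
# Create the directory and file that GitHub Actions looks for.
mkdir -p .github/workflows
cat > .github/workflows/continuous-integration-workflow.yml <<'EOF'
name: Build
on: [push]
jobs:
  build:
    name: "David's Build"
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v1
      - name: 'Set up JDK 1.8'
        uses: actions/setup-java@v1
        with:
          java-version: 1.8
      - name: 'gradlew build'
        run: cd ${GITHUB_WORKSPACE} && ./gradlew build
EOF
```

The quoted heredoc delimiter ('EOF') keeps ${GITHUB_WORKSPACE} literal so the shell doesn't expand it while writing the file.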
If you want to skip the standard Gradle configuration and figuring out which Gradle files need to be committed for this to work, have a gander at or fork my <a href="https://github.com/ratamacue/java-github-actions">MIT-licensed repo here</a>. You can see the passing build <a href="https://github.com/ratamacue/java-github-actions/runs/859915066?check_suite_focus=true">here</a>.David Ronhttp://www.blogger.com/profile/03498490798803568055noreply@blogger.com0tag:blogger.com,1999:blog-2643923222133320297.post-89808841819453553402020-07-10T15:36:00.002-07:002020-07-10T15:39:48.194-07:00Programmatically Clearing Caps Lock In LinuxI'm not particularly a fan of caps lock. I typically re-map the caps lock key to
do something more useful. Sometimes caps lock becomes enabled accidentally or
because some application enables it to be "helpful". Here's a little Python script I
keep around to disable caps lock from a terminal/shell just in case.
<div><br /></div>
<pre class="code">#!/usr/bin/env python
from ctypes import cdll, c_uint, c_void_p
X11 = cdll.LoadLibrary("libX11.so.6")
X11.XOpenDisplay.restype = c_void_p  # Display* (avoid pointer truncation)
X11.XkbLockModifiers.argtypes = [c_void_p, c_uint, c_uint, c_uint]
X11.XCloseDisplay.argtypes = [c_void_p]
display = X11.XOpenDisplay(None)
if not display:
    raise SystemExit("no X display available")
# XkbUseCoreKbd (0x0100), affect=LockMask (2), values=0: clear Caps Lock only
X11.XkbLockModifiers(display, c_uint(0x0100), c_uint(2), c_uint(0))
X11.XCloseDisplay(display)
</pre><a href="https://itectec.com/ubuntu/ubuntu-how-to-turn-off-caps-lock-the-lock-not-the-key-by-command-line/"><font size="2">Source</font></a>David Ronhttp://www.blogger.com/profile/03498490798803568055noreply@blogger.com1tag:blogger.com,1999:blog-2643923222133320297.post-86469101313479338752020-07-10T13:44:00.001-07:002020-07-10T13:44:32.313-07:00Simple Sneakernet Backups<div>
The easiest way to back up data is to synchronize your data to a cloud. For
this, I use Syncthing and various cloud services. But, if I accidentally delete something important and that delete operation is synchronized to my cloud-based backups before I catch it, that data is lost. For true offline and
delete-resistant backups, nothing beats the
<a href="https://en.wikipedia.org/wiki/Sneakernet">sneakernet</a>. The cheapest way to back up a home computer is to purchase a USB hard drive,
plug it in, and copy files. And, you don't need any special software to make
this work. Any POSIX machine with tar and gpg can make an encrypted backup.</div>
<div>First, disable sleep on the device that will be backing up the data.</div>
<div>Next run the following command to back up your data:</div>
<pre class="code">SOURCE=/home/you
DESTINATION=/media/mounteddrive/backup-2020-02-05.tar.gz.gpg
tar czvpf - "$SOURCE" | gpg --symmetric --cipher-algo aes256 -o "$DESTINATION"
</pre>
<div>You'll be prompted for a symmetric AES password.</div>
<div><br /></div>
And, to restore:
<pre class="code">SOURCE=/media/mounteddrive/backup-2020-02-05.tar.gz.gpg
DESTINATION=/home/you
(cd "$DESTINATION" && gpg -d "$SOURCE" | tar xzvf -)
</pre>
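Before trusting a backup, it's worth verifying that it decrypts and that the archive inside is intact. This sketch (using the same example file name as above) lists the archive contents without writing anything to disk; you'll be prompted for the passphrase:

```shell
BACKUP=/media/mounteddrive/backup-2020-02-05.tar.gz.gpg
# Decrypt and list the archive; success means both the gpg layer and the
# tar layer are readable end to end.
if gpg -d "$BACKUP" | tar tzf - > /dev/null; then
  echo "backup verified"
else
  echo "backup FAILED verification"
fi
```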
What's great about this is that you are using ubiquitous free open source tools. You know that wherever or whenever you plan to restore this data, you'll be able to.David Ronhttp://www.blogger.com/profile/03498490798803568055noreply@blogger.com0tag:blogger.com,1999:blog-2643923222133320297.post-8775694042213134372020-07-10T11:33:00.003-07:002020-07-10T11:42:53.246-07:00Quick And Dirty URL Shortener On Any Site<i>First, I think this is a terrible idea and you should never do this. Second, I did this right here on this site. </i><div><br /></div><div>I wanted to add a URL shortener to my website so that when people go to davidron.com/something, they will be redirected to some arbitrary location. I added the following to my 404 page:</div>
<pre class="code"><script language="javascript">
var key = window.location.href.split("/")[3];
var urls = {
  'ssh': "http://sdf.org/ssh",
  'blog': "http://blog.davidron.com",
  'emacs': "http://ratamacu.freeshell.org/qe",
};
if (key) {
  if (urls[key]) {
    window.location.href = urls[key];
  } else {
    document.write("'" + key + "' not found :(");
  }
}
</script></pre><div>Now, I can go to <a href="http://davidron.com/ssh">davidron.com/ssh</a> to open a terminal or <a href="http://davidron.com/emacs">davidron.com/emacs</a> to download a qe binary.</div><div><br /></div>
This has several disadvantages:<div><ul style="text-align: left;"><li>It's a complete abuse of the 404 page, which shouldn't redirect anywhere.</li><li>It's javascript, and a URL shortener should really use <a href="https://en.wikipedia.org/wiki/URL_redirection#HTTP_status_codes_3xx">HTTP 3xx redirects</a>.</li><li>It's slow: you have to load and render the 404 page, show (part of) it to the user, and then run the script on it.</li><li>Whatever <a href="https://en.wikipedia.org/wiki/Security_through_obscurity">security through obscurity</a> there might be in a cryptic short URL is lost by the fact that I've published the entire database of URLs on my 404 page.</li></ul><div>It also has a couple of advantages:</div></div><div><ul style="text-align: left;"><li>It's stupid easy.</li><li>It works.</li><li>It's easy to edit right in the HTML.</li></ul></div>David Ronhttp://www.blogger.com/profile/03498490798803568055noreply@blogger.com0tag:blogger.com,1999:blog-2643923222133320297.post-28721002109771795162020-05-27T15:48:00.002-07:002020-07-09T16:48:33.025-07:00Backing Up Data In A Chrome Extension (Ears Audio Toolkit)<div class="separator" style="clear: both; text-align: center;">
<a
href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjfoQ1ItMBt7MIXPUgCE2t1Q49yL8z68t6-Yori5QItzRMBRUOFhGCktFZ5qWQSa39kqvzaXTE3b-MoInWI3eEWkFNJrSRT9s5pxhZKywVdx5A8Zmj61-NEFebxTVH8Zuetn7Sm3lik8Qo/s1600/Screen+Shot+2020-05-27+at+3.35.27+PM.png"
style="margin-left: 1em; margin-right: 1em;"
><img
border="0"
height="456"
src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjfoQ1ItMBt7MIXPUgCE2t1Q49yL8z68t6-Yori5QItzRMBRUOFhGCktFZ5qWQSa39kqvzaXTE3b-MoInWI3eEWkFNJrSRT9s5pxhZKywVdx5A8Zmj61-NEFebxTVH8Zuetn7Sm3lik8Qo/s640/Screen+Shot+2020-05-27+at+3.35.27+PM.png"
width="640"
/></a>
</div>
<br />
<br />
If you have a Chrome Extension with some state you'd like to back up for
posterity, then this trick might work for you. In this case, I'm going to back
up Ears Audio Toolkit, an amazing audio Equalizer that allows you to save
equalizer presets for every audio device you have.<br />
<br />
<ul>
<li>Right-click the extension</li>
<li>Choose inspect</li>
<li>
In the left panel, select Application->Local
Storage->chrome-extension://....
</li>
</ul>
From here down, the directions will depend on the extension. For Ears Audio
Toolkit:<br />
<ul>
<li>Find the PRESETS key</li>
<li>Double-click the JSON representing the value of PRESETS</li>
<li>Copy the giant JSON blob and paste it somewhere safe.</li>
</ul>
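Once the blob is pasted into a file (I'll call it presets.json; the name is my own choice, not the extension's), it's worth validating it before committing it to git:

```shell
# python3 -m json.tool parses the file and exits nonzero if it's malformed.
if python3 -m json.tool presets.json > /dev/null 2>&1; then
  echo "presets.json is valid JSON"
else
  echo "presets.json is missing or malformed"
fi
```

A truncated copy/paste is the most likely failure mode, and this catches it immediately instead of at restore time.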
<div>
Ears Audio Toolkit offers a $1/month subscription to synchronize your data
for you, but it doesn't really support capturing this data for your own
personal backup or versioning it in git. That being said, I strongly
suggest throwing $5 or more at the author to show your support, since this
effectively bypasses the tool's business model. Also, since this is not really
a supported trick, there's no guarantee the author won't change the format of
the JSON blob and break this strategy.
</div>
David Ronhttp://www.blogger.com/profile/03498490798803568055noreply@blogger.com0tag:blogger.com,1999:blog-2643923222133320297.post-85890779147639776192017-10-29T17:20:00.003-07:002017-10-29T17:24:18.308-07:00Cox Data Usage ChargesRecently, Cox, a city-sanctioned monopoly, has <a href="https://arstechnica.com/information-technology/2017/09/cox-inches-closer-to-nationwide-data-caps-with-move-into-california/">begun charging users</a> for data usage over 1TB in my town, Santa Barbara. While I don't actually believe that charging users for data usage is wrong, I do believe that there is an actual fair market value for data and that Cox is overcharging dramatically - in an environment where it has a government-sanctioned monopoly. This, I believe, is wrong.<br />
<br />
First, let's break down how I see pricing for data connections working into two groups: Fixed Costs and Variable Costs.<br />
<h3>
<b>Fixed Costs</b></h3>
The primary fixed costs for home network connections are the physical connection between the internet provider and the home, plus the other networking equipment required to make the connection. This would be similar to the electrical lines that connect your home to the grid and the grid infrastructure necessary to transmit electricity from a generator to a home. As with the power grid, there is not really an increase in cost here when a customer consumes more, unless the customer has specialized requirements that require some sort of upgrade - extremely uncommon.<br />
<h3>
<b>Variable Costs</b></h3>
Internet service providers have to connect their users to the rest of the Internet. Fundamentally, a residential internet service provider has one or more agreements with other internet service providers to ensure that any computer on the internet can talk to any other computer on the internet. Cox, for instance, likely has arrangements with companies such as Level3 and Cogent, who can connect Cox to other service providers such as Comcast and Time Warner. Internet companies like Google and Facebook also have agreements with the same providers (Level3 and Cogent). Unlike residential providers, these providers must compete for business, and companies who connect to them can distribute traffic across a variety of them, balancing it throughout the day to bring costs down. These costs continue to shrink every year.<br />
<br />
A more detailed writeup of these fixed and variable costs can be found at <a href="https://broadbandnow.com/report/much-data-really-cost-isps/">this very good Broadband Now article</a>.<br />
<h2>
<b>So, what is wrong with what Cox is doing?</b></h2>
Cox's pricing is <a href="https://www.cox.com/residential/pricing.html">here</a>. I find this pricing suspicious given that <a href="https://fiber.google.com/cities/kansascity/plans/">Google Fiber is able to provide substantially better service at a lower cost in Kansas City</a>. But let's set aside the fact that Cox charges more for less to all subscribers, and <a href="https://arstechnica.com/information-technology/2017/09/cox-inches-closer-to-nationwide-data-caps-with-move-into-california/">look at what Cox charges</a> people who use over 1TB of data to, say, restore their computers using an online backup service: $10/50GB, or roughly $0.20/GB. These overage charges should reflect only changes in variable costs, not any fixed costs, which are already covered by the over $70/month Cox charges customers after all fees. Google, Microsoft, and Amazon charge between $0.087/GB and $0.12/GB - or roughly half of what Cox charges - and these providers are including their fixed costs in those rates. Using only 1GB of data in a month with Google/Amazon/Microsoft costs about a dime. Using only 1GB of data in a month with Cox costs over $70. Let's assume that this discrepancy is because Cox has higher fixed costs. That means that those fixed costs are covered, and any additional charges due to increased variable costs should be somewhere near market value for network transit fees. Cox is charging over 10x those fees (more on this later).<br />
<h2>
<b>That's only a dime. What does it matter?</b></h2>
If a home user wants to restore a 3 TB hard disk from a backup, Cox will charge that person an extra $400 (2TB of overage billed at $10 per 50GB block). That's a lot of money just to restore a backup! This pricing will also act as a mechanism to deter people from streaming video from Cox's competitors, Youtube, Netflix, etc. Once you hit that cap, Cox will charge you $10 for every 16 hours of Netflix you watch in 4k. Netflix charges $12/month for the ability to stream as much 4k video as you want, and that includes both the cost of paying licensing fees for the movies and paying their internet provider (Cogent or Level3) to ship the movie over the network. <b>That's just a couple of movies a week before you're paying more money to Cox than you are to Netflix!</b><br />
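At Cox's published rate ($10 per 50GB block, billed in whole blocks), overage charges can be computed directly with a few lines of shell:

```shell
# Dollars charged for a given number of GB over the cap, at $10 per 50GB
# block, rounding up to whole blocks the way block billing does.
overage_cost() {
  local gb=$1
  local blocks=$(( (gb + 49) / 50 ))
  echo $(( blocks * 10 ))
}

overage_cost 2000   # prints 400: restoring a 3TB disk past a 1TB cap
overage_cost 50     # prints 10
overage_cost 51     # prints 20: one GB into the next block buys the whole block
```

The rounding-up is why the effective per-GB price ranges from $0.20 up to $10.00 depending on how much of the last block is actually used.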
<h2>
<b>How much should Cox charge then?</b></h2>
Here's what Amazon, Microsoft, and Google charge:<br />
<br />
<ul>
<li><a href="https://aws.amazon.com/ec2/pricing/on-demand/">Amazon AWS</a> $0.09/GB (Fixed Costs Included), </li>
<li><a href="https://azure.microsoft.com/en-us/pricing/details/bandwidth/">Microsoft Azure</a> $0.087/GB (Fixed Costs Included)</li>
<li><a href="https://cloud.google.com/storage/pricing#network-pricing">Google Cloud</a> $0.12/GB (Fixed Costs Included)</li>
<li><a href="http://www.michaelgeist.ca/2011/04/cost-to-send-a-gb/">Actual Interconnect and Transit Costs</a> <$0.01 / GB</li>
<li>Cox $0.20 - $10.00/GB (Fixed Costs an additional $70/month)</li>
</ul>
These providers, including Cox, were likely <a href="http://www.michaelgeist.ca/2011/04/cost-to-send-a-gb/">paying less than $0.01/GB as of 2011</a>, in a market where prices have been falling consistently for decades. Without transparency from Cox, it's hard to say what their bandwidth acquisition costs are, and Cox should be allowed to turn a profit. But I consider anything more than $0.03/GB (billed in 1GB blocks, not 50GB blocks) highly suspicious - and even that reflects a 3x profit margin given the data I have been able to find. Cox is currently charging between $0.20 and $10.00/GB depending on how much of that 50GB block is used.<br />
<h2>
<b>What should be done?</b></h2>
<div>
I would like for the Santa Barbara city council to publicly work with Cox to correct this abuse of its monopoly privileges in our community. My personal feedback is that the city should take measures to ensure that policy changes by our sanctioned monopolies re-open negotiations between the monopoly and the city, so that the tax-paying consumer is treated fairly. I would hope other jurisdictions do so as well.</div>
<div>
<br /></div>
David Ronhttp://www.blogger.com/profile/03498490798803568055noreply@blogger.com2tag:blogger.com,1999:blog-2643923222133320297.post-72103578089970447292017-09-15T12:48:00.001-07:002020-07-09T16:47:40.942-07:00Git: Resetting a remote branch to a specific hash without a force pushI had a series of commits (including merges) on a branch that I wanted to roll back quickly. I wasn't able to find any help for this problem that didn't involve either giving git a bunch of help navigating trees with the <span class="code">git revert -m</span> command or using reset and a force push. Here's a trick that's very similar to the reset strategy but retains all of the history:<br />
<br />
<br />
<div class="code">
> git reset --hard THE_HASH_YOU_WANT_TO_RETURN_TO
# That's our good commit
> git rebase -i origin/master
# During the rebase, I squashed all but the top commit to make it one giant commit.
# Gives us a single commit with all of the things that changed since the good commit. That commit was HASH_OF_ALL_CHANGES_SINCE_GOOD_COMMIT
> git revert HASH_OF_ALL_CHANGES_SINCE_GOOD_COMMIT
# That makes a negative commit of that one giant commit named REVERT_OF_ALL_CHANGES_SINCE_GOOD_COMMIT
> git reset --hard origin/master
# Back to reality
> git cherry-pick REVERT_OF_ALL_CHANGES_SINCE_GOOD_COMMIT
# applies a change that reverts all changes since THE_HASH_YOU_WANT_TO_RETURN_TO
</div>
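As an aside: when the commits being rolled back contain no merges, newer versions of git can collapse the squash-and-revert dance into a single range revert. Here's a self-contained sketch in a throwaway repo (names are illustrative):

```shell
# Build a demo repo: one good commit followed by two bad ones.
cd "$(mktemp -d)" && git init -q demo && cd demo
git config user.email you@example.com
git config user.name you
echo 1 > f && git add f && git commit -qm "good commit"
GOOD=$(git rev-parse HEAD)
echo 2 > f && git commit -qam "bad change 1"
echo 3 > f && git commit -qam "bad change 2"

# Revert everything after the good commit in one go. This fails if the
# range contains merge commits, which is exactly when the squash trick helps.
git revert --no-commit "$GOOD"..HEAD
git commit -qm "revert everything after good commit"
cat f   # prints 1: the tree matches the good commit again
```

Either way you end up with one ordinary revert commit on the branch, so no force push is needed.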
After that, just push!David Ronhttp://www.blogger.com/profile/03498490798803568055noreply@blogger.com0tag:blogger.com,1999:blog-2643923222133320297.post-77783122607004067682016-12-31T20:51:00.006-08:002020-07-09T16:49:31.950-07:00Portable Minimal Emacs Clone For Linux<div class="separator" style="clear: both; text-align: center;"></div>
<div class="separator" style="clear: both; text-align: center;">
<a
href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjXRXBcZFMEvkrjANLbVrFTZU6XBvQBJJFMROs_7Os3wx8U-ydOddm0WbYdSRTCdcAM0uWjcmYbSJ8LhrLjvpDuozZsHSQfiWZj1N0rIEzbetsEZZmXQkwUiqcCoEpztSqucnmuZfkkf6M/s1600/Screen+Shot+2016-12-31+at+8.38.09+PM.png"
style="margin-left: 1em; margin-right: 1em;"
><img
border="0"
height="249"
src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjXRXBcZFMEvkrjANLbVrFTZU6XBvQBJJFMROs_7Os3wx8U-ydOddm0WbYdSRTCdcAM0uWjcmYbSJ8LhrLjvpDuozZsHSQfiWZj1N0rIEzbetsEZZmXQkwUiqcCoEpztSqucnmuZfkkf6M/s320/Screen+Shot+2016-12-31+at+8.38.09+PM.png"
width="320"
/></a>
</div>
<br />
I have the benefit of working in an organization that provides quite a bit of
autonomy, but even still, the life of a programmer inside a Fortune 500
company means I'm going to be editing text files on machines for which I can't
install software. Sometimes, that means my editor choices are limited to
nano and vim, and I prefer emacs. Typically, I just want an editor that
responds to my muscle memory without thinking too hard. So, a simple
editor that feels like emacs (maybe with a dired mode and a shell) is good
enough. Bonus: it's best if I can just wget the editor and use it.
<a href="http://bellard.org/qemacs/">Enter QEmacs</a>!<br />
<ol>
<li>
Download the <a href="http://bellard.org/qemacs/">latest version</a> from
the website.
</li>
<li>Extract it somewhere.</li>
<li>
Run
<span class="code"
>./configure --disable-x11 --disable-xv --disable-html --disable-png</span
>
</li>
<li>Run <span class="code">make</span>.</li>
<li>
Now you'll have a binary called qe. <a
href="http://bellard.org/qemacs/qe-doc.html"
>Read the docs here</a
>.
</li>
</ol>
<div>
If you have any trouble building,
<a href="http://ratamacu.freeshell.org/qe">here's my 272k binary</a>.
Let me know if it works for you.
</div>
David Ronhttp://www.blogger.com/profile/03498490798803568055noreply@blogger.com0tag:blogger.com,1999:blog-2643923222133320297.post-56407512974718983722016-12-17T15:03:00.001-08:002016-12-17T15:03:30.559-08:00Deploying Scala Jar Files into AWS LambdaThis blog post is a text representation of a lightning talk I gave at a monthly meeting at work. A video recording <a href="https://www.youtube.com/watch?v=lncQl5goS5I">is available here</a>.<br />
<br />
AWS Lambda is a product that allows you to upload code, configure a "trigger" for that code, and run the code in Amazon's infrastructure and be billed in 100ms increments for the compute resources. Read more about it <a href="https://aws.amazon.com/lambda/">here</a>. In this article let's take a look at how we can put some Scala code into AWS.<br />
<h2>
Set Up Environment</h2>
<div>
<ul>
<li>Log into the <a href="https://us-west-1.console.aws.amazon.com/console/home">AWS Management Console</a> and navigate to IAM. </li>
<li>Click Users->Find your username, and click on it (or create one)</li>
<li>Click Security Credentials</li>
<li>Click Create Access Key</li>
<li>Note the Access Key and Secret Access Key, you'll need these.</li>
</ul>
<div>
While you're here, let's make an execution role</div>
</div>
<div>
<ul>
<li>Click Roles</li>
<li>Click Create New Roles</li>
<li>I'm going to use the name "lambda_basic_execution"</li>
<li>Choose "select" next to AWS Lambda</li>
<li>Create a policy</li>
<li>Find and select AWSLambdaBasicExecutionRole</li>
</ul>
<div>
Ok, that's enough AWS stuff. Let's get your laptop environment going...<br />
<br /></div>
<div>
<br /></div>
<div>
<a href="http://docs.aws.amazon.com/cli/latest/userguide/installing.html">Install the AWS Command Line Interface</a>. On a mac with brew, I just typed "brew install awscli".</div>
<div>
<br /></div>
<div>
Now configure the aws command line interface:</div>
<div>
<br /></div>
<div>
<div class="code">
$ aws configure
AWS Access Key ID [****************PEPQ]:
AWS Secret Access Key [****************+iVg]:
Default region name [us-west-1]:
Default output format [json]:
</div>
</div>
<div>
<br />
<h2>
The Code</h2>
</div>
<div>
Next, we need to create a project. Create a normal maven project, but be sure to include the following two additions. A plugin:<br />
<div class="code">
<plugin>
  <groupId>org.apache.maven.plugins</groupId>
  <artifactId>maven-shade-plugin</artifactId>
  <version>2.3</version>
  <configuration>
    <createDependencyReducedPom>false</createDependencyReducedPom>
  </configuration>
  <executions>
    <execution>
      <phase>package</phase>
      <goals>
        <goal>shade</goal>
      </goals>
    </execution>
  </executions>
</plugin>
</div>
And a dependency:<br />
<div class="code">
<dependency>
  <groupId>com.amazonaws</groupId>
  <artifactId>aws-lambda-java-core</artifactId>
  <version>1.0.0</version>
</dependency>
</div>
<br />
There is a poorly documented requirement that you return a specially formatted Java Map that represents the HTTP response. This is necessary because not all AWS Lambda functions speak HTTP, but this one does. The Map will be serialized by Amazon using Jackson, so we need to return Java POJOs and Java Collections. Since we're writing Scala, things get a little ugly. Here's a little helper package object to make that easier:<br />
<div class="code">
package object gatewayResponse {
  def response(response: ResponseCode, body: String, headers: Map[String, String] = Map()): java.util.Map[String, Object] = {
    mapAsJavaMap(Map(
      "statusCode" -> response.boxed(),
      "headers" -> mapAsJavaMap(headers).asInstanceOf[java.util.Map[java.lang.String, java.lang.String]],
      "body" -> body)
    ).asInstanceOf[java.util.Map[java.lang.String, java.lang.Object]]
  }

  case class ResponseCode(value: Int) {
    def boxed() = {
      Int.box(value)
    }
  }

  val OK = ResponseCode(200)
}
</div>
<br />
And, finally, our lambda function:<br />
<div class="code">
package com.cj.demo

import com.amazonaws.services.lambda.runtime.Context
import com.cj.lambda.gatewayResponse._

class Demo {
  def myHandler(context: Context) = {
    response(OK, "David Says Hello")
  }
}
</div>
<br />
<h2>
Deployment</h2>
To build the code, just run:<br />
<div class="code">
mvn package</div>
<br />
And to deploy the lambda to AWS the first time, just run the following:<br />
<div class="code">
$ aws lambda create-function \
> --function-name scala-lambda-demo \
> --runtime java8 \
> --role arn:aws:iam::727586729164:role/lambda_basic_execution \
> --handler com.cj.demo.Demo::myHandler \
> --zip-file fileb://target/scala-lambda-demo-1.0.jar
</div>
<br />
<h2>
Handler</h2>
One last one-time step we need to follow is to create an HTTP endpoint and link it to AWS. Do the following:<br />
<br />
<ul>
<li>Navigate once again to the <a href="https://console.aws.amazon.com/console/home">AWS Console</a>.</li>
<li>Navigate to the API Gateway</li>
<li>Click "Create API"</li>
<li>Give the API a name and click "Create API"</li>
<li>Create a resource, and then inside that resource create a method of type "get".</li>
<li>Integration type=lambda, use lambda proxy integration, make the region match your previously chosen region, and the lambda function you deployed should auto-complete when you start typing its name. If it doesn't auto-complete, it means that you've either chosen the wrong region or that something went wrong when you used the aws command to create the function.</li>
</ul>
<div>
<span id="goog_1520677261"></span><span id="goog_1520677262"></span><br /></div>
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhSC2-uWw0NOMa3v0tHCAc1frDTY6U18genINHMXmv3YsJ9RlAhcYGkD7R0kNLcM3LSAGrcHJRlLu_34Orj4cPnubwlB74MI2NdNNH9HHTaw7kdjdSMY2-YN2yfDgQ9fldpgZ0xvnNDqv4/s1600/api07.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" height="185" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhSC2-uWw0NOMa3v0tHCAc1frDTY6U18genINHMXmv3YsJ9RlAhcYGkD7R0kNLcM3LSAGrcHJRlLu_34Orj4cPnubwlB74MI2NdNNH9HHTaw7kdjdSMY2-YN2yfDgQ9fldpgZ0xvnNDqv4/s320/api07.png" width="320" /></a></div>
<br />
Now, click actions->deploy API and you should see the following screen:<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhdMN2HwPsJxBmt-zbZ6jjjcs6OpDviyx2DehiqrYAyCx6ahZuevRHEjRGpzDCsLAW1HIMYocVZT4ti0oG08rgQ94Q2RByq8PcbtWFAAy6WSi0hUnHvHBjIwxo5hP8ECDojmDYLWV5UMn8/s1600/api09.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" height="213" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhdMN2HwPsJxBmt-zbZ6jjjcs6OpDviyx2DehiqrYAyCx6ahZuevRHEjRGpzDCsLAW1HIMYocVZT4ti0oG08rgQ94Q2RByq8PcbtWFAAy6WSi0hUnHvHBjIwxo5hP8ECDojmDYLWV5UMn8/s320/api09.png" width="320" /></a></div>
<br />
On the next screen, you will be given your HTTP endpoint:<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjNxJwt7816VbWOUGqKUakCfMB1bJBxTF1RyRWV4HhnusosbP4g7pa9FM3MXJdj9GtY4Eg14krZlfyu5YfLyLypeyJ96zph91xoTdBaGR9gcWaGRwTtXMO8EWs5_nMdeeNXCCQfjFptV2c/s1600/api10.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" height="74" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjNxJwt7816VbWOUGqKUakCfMB1bJBxTF1RyRWV4HhnusosbP4g7pa9FM3MXJdj9GtY4Eg14krZlfyu5YfLyLypeyJ96zph91xoTdBaGR9gcWaGRwTtXMO8EWs5_nMdeeNXCCQfjFptV2c/s320/api10.png" width="320" /></a></div>
<br />
Just take that "Invoke URL" and append the resource name to the end, and it should take you to the results of running your Scala code. If you see <span style="white-space: pre-wrap;">{"message":"Missing Authentication Token"}, don't forget to add the name of your resource to the URL!</span><br />
<span style="white-space: pre-wrap;"><br /></span>
<span style="white-space: pre-wrap;">That's it. You've created an HTTP endpoint and linked it to a "hello world" jar written in Scala and deployed to AWS Lambda. If you want to add features to your application, just repeat the mvn package step and then use the following command to update the lambda function:</span><br />
<div class="code">
aws lambda update-function-code \
--function-name scala-lambda-demo \
--zip-file fileb://target/scala-lambda-demo-1.0.jar
</div>
<br />
<br />
Happy Coding!<br />
<br />
<br /></div>
</div>
David Ronhttp://www.blogger.com/profile/03498490798803568055noreply@blogger.com1tag:blogger.com,1999:blog-2643923222133320297.post-23421226669507729802016-11-11T09:19:00.001-08:002020-07-09T16:50:50.997-07:00Deploying Node JS Apps to Amazon Lambda In 5 Minutes:AWS Lambda is a bit of a game-changer for developers who don't have the means, or the desire, to mess with devops, even in the simplified world of containers. We just write some code and put it somewhere. When a request comes in, resources are automatically allocated to run that code, the code is executed, and then the resources are deallocated. In Lambda, we are only billed when our software is actually run, so hosting a website with very little traffic can be almost free. Unfortunately, these things are not easy to start up and configure using the tools Amazon provides. But an open source utility called <a href="https://serverless.com/">serverless</a> makes it really easy!<br />
<br />
Here's how:<br />
<ul>
<li>Signup / Log into Amazon AWS</li>
<li>Create an IAM AWS Access Key ID and AWS Secret Access Key <a href="https://console.aws.amazon.com/iam/home">here</a>. You'll need these in a second.</li>
<li>npm install -g serverless</li>
</ul>
Run the following on the command line:
<br />
<div class="code">
$ export AWS_ACCESS_KEY_ID=YOUR_KEY_ID_HERE
$ export AWS_SECRET_ACCESS_KEY=YOUR_ACCESS_KEY_HERE
$ mkdir test && cd test
$ serverless create --template aws-nodejs
</div>
<br />
<div>
Edit the serverless.yaml file and uncomment the following lines </div>
<div>
<div class="code">
events:
  - http:
      path: users/create
      method: get
</div>
<div>
<br /></div>
<div>
And, run:</div>
<div class="code">
serverless deploy</div>
<div>
After a few seconds, your server's URL should be printed. Just edit <span class="code">handler.js</span> to add functionality to your application.<br />
<br />
<br /></div>
</div>
David Ronhttp://www.blogger.com/profile/03498490798803568055noreply@blogger.com0tag:blogger.com,1999:blog-2643923222133320297.post-133187389304490302016-10-25T08:31:00.003-07:002016-10-25T08:33:26.690-07:00Upgrading a Netgear Orbi - Solution to "This firmware file is incorrect!"<div class="separator" style="clear: both; text-align: center;">
</div>
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgfNgVh1qDZJxT2dAkgYRLHIoWWmAeDZLoU8ZuUMZC2Ik0Pl5EmZNszNlfkt6WzuQA7zZwiNU6w3y34bIRFh7xEeogp_Jrt48QItgvHIvU4AS1A4RgyzunMe1H7U36PADLNsAS1qAvpyO4/s1600/orbi_transparent.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" height="285" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgfNgVh1qDZJxT2dAkgYRLHIoWWmAeDZLoU8ZuUMZC2Ik0Pl5EmZNszNlfkt6WzuQA7zZwiNU6w3y34bIRFh7xEeogp_Jrt48QItgvHIvU4AS1A4RgyzunMe1H7U36PADLNsAS1qAvpyO4/s320/orbi_transparent.png" width="320" /></a></div>
<br />
<br />
I just got a brand new Netgear Orbi router with extender (called a satellite) this weekend. When navigating to the firmware upgrade page, the UI indicated that I had an upgrade available for the base station, but the satellite unit was stuck in "please wait" mode. So, I decided to try upgrading the firmware manually. I read online that you should always upgrade the satellite first, so <a href="https://www.netgear.com/support/product/RBK50#download">I downloaded the firmware for both</a>, logged into my router, and attempted to upgrade. I was then presented with the following error message when attempting to load the firmware for the RBS50 (the satellite):<br />
<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjjXih5ByUV9C2ZX3ugvjwK_wRD30EumyK2TqhALwkYRLLACKT_L2fVKiBdIY-lo8zpqfc7fLXCfipxXS-CxoLSPHFcvnx4NT9S93D-mTw2nBDeh17hJfBljynCt05qhhNWZfG4bRTHWH0/s1600/Screen+Shot+2016-10-23+at+3.40.23+PM.png" imageanchor="1"><img alt="The firmware file is incorrect! Please get the firmware file again and make sure it is correct firmware for this product." border="0" height="73" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjjXih5ByUV9C2ZX3ugvjwK_wRD30EumyK2TqhALwkYRLLACKT_L2fVKiBdIY-lo8zpqfc7fLXCfipxXS-CxoLSPHFcvnx4NT9S93D-mTw2nBDeh17hJfBljynCt05qhhNWZfG4bRTHWH0/s400/Screen+Shot+2016-10-23+at+3.40.23+PM.png" title="" width="400" /></a></div>
<div class="separator" style="clear: both; text-align: left;">
After a little poking around, I realized that my satellite appears in the list of "attached devices" and when I connect directly to the satellite, there is a different interface for upgrading the firmware of that device. So, I used that interface to upgrade just the satellite (the RBS50). Once finished, I could upgrade the firmware of the Orbi router using the automatic upgrade feature.</div>
David Ronhttp://www.blogger.com/profile/03498490798803568055noreply@blogger.com3tag:blogger.com,1999:blog-2643923222133320297.post-68167130354447628942016-06-30T10:46:00.002-07:002016-06-30T10:46:57.697-07:00Why I'm Sticking With EvernoteI had been a freeloader with Evernote since 2012, when <a href="https://www.google.com/googlenotebook/faq.html">Google Notebook shut down</a>, until this year. A few months ago, Evernote forced me to upgrade to a $25/year Plus account to re-enable the ability to send notes to myself via email. I paid up for a single year and deferred really thinking about switching to something cheaper. It turns out that I don't use the email-to-self feature much, so shortly after paying, I decided to go back to freeloading once the year was up. Then, the news hit that Evernote was going to <a href="http://arstechnica.com/gadgets/2016/06/evernote-limits-free-tier-to-two-devices-raises-prices-40/">limit the free tier to two devices and raise the price from $25 to $35 for the lowest paid tier</a>. I have 6 devices (counting two Chromebooks that have the Evernote Android app installed). My Evernote notebook is about half a gigabyte today and growing.<br />
<br />
<div class="code">
~/Library/Containers/com.evernote.Evernote$ du -cksh
469M .
469M total
</div>
<br />
$35/year seems like a lot for an app, but self-hosting only the storage would exceed that cost on any platform that supported running my own applications. I wouldn't self-host a server as important as a repository of my personal knowledge.<br />
<br />
<b>What about Google Keep?</b><br />
I was burned pretty hard by Google when it killed <a href="https://www.google.com/googlenotebook/faq.html">Google Notebook</a> and then launched Google Keep a few years later with no ability to transfer my data, so I want to keep my notes with a company that takes my long-term data seriously. I still don't think it wise to store my notes with Google, even for free, and even though Keep does everything I want from a note-taking app. Evernote even alludes to this in the marketing material around this change.<br />
<br />
<blockquote class="tr_bq">
Evernote isn’t a vast corporation, and note-taking isn’t a sideline for us. It’s what we do, and we strive to do it better than anyone else. We hope you’ll continue to capture your thoughts and develop your ideas with us. [<a href="https://blog.evernote.com/blog/2016/06/28/changes-to-evernotes-pricing-plans/">source</a>]</blockquote>
<br />
<br />
<b>Microsoft OneNote?</b><br />
I've been burned so many times by Microsoft's embrace and extend, it's going to take a long time before I can be sure Microsoft would actually support all of the platforms I want to be able to access my data. I'd rather trust Google.<br />
<br />
<b>SimpleNote, Todo.txt, Other?</b><br />
I really like the ability to take a picture of my hand-written notes and then search for them, and I'm willing to pay for that feature. I'm not aware of any of these (or any other cloud-based service) offering it.David Ronhttp://www.blogger.com/profile/03498490798803568055noreply@blogger.com3tag:blogger.com,1999:blog-2643923222133320297.post-85686729052742763162015-08-02T15:10:00.000-07:002015-08-02T15:35:41.370-07:00Book Review: Inspired by Marty Cagan<div class="separator" style="clear: both; text-align: center;">
</div>
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEj7YqU6ZPf0u2bDhE_U8AW7EQq9vVkW7lQek0-Jv5QRmeF8_kVPUKicvwONKwT7TBuICJteOCcgMxhztzXonOfrLlUmFYJu8AEH4AI-9MV7MMNJ6TDNFO4WOR8956Osf26lhiB5T-7QaXQ/s1600/inspiredlg.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEj7YqU6ZPf0u2bDhE_U8AW7EQq9vVkW7lQek0-Jv5QRmeF8_kVPUKicvwONKwT7TBuICJteOCcgMxhztzXonOfrLlUmFYJu8AEH4AI-9MV7MMNJ6TDNFO4WOR8956Osf26lhiB5T-7QaXQ/s1600/inspiredlg.png" /></a></div>
<br />
This book is a collection of blog posts cobbled together, with no real thesis, and it often contradicts itself. The contradiction is dangerous because the book emphasizes putting creative control of the product in the hands of those with titles containing "UX" and "Product" and devalues the creative and innovative capabilities of the rest of the organization. Because it centralizes those roles and is internally inconsistent, it could be treated like a religious text in which the high priests of Product cherry-pick whatever ideas they want to build any product they imagine, so long as they can get customers to say it makes them happy.<br />
<br />
The advice in this book isn't bad, per se; it just isn't great. The author, Marty Cagan, takes the waterfall model of product design and extends it into the product organization, defining a dozen or so specific job titles and outlining which titles are necessary in which phases of a project. Unfortunately, in a highly creative and collaborative environment, this advice could slow down development of a product and stifle the creativity of people who can do more than exist in their pre-defined roles. Worse, he doesn't appear to understand the creativity of engineering and what kind of innovation takes place on the technical side of the product development lifecycle.<br />
<br />
Cagan provides some excellent advice. Namely:<br />
<div>
<ul>
<li>Product owner sees the product from start to finish and participates in all user testing.</li>
<li>The Engineering Lead is involved in the entire process to provide information about what is possible and how hard it is.</li>
<li>Focus groups are bad because customers don't know what they want.</li>
<li>Sometimes the customer asks for x and the best thing for the business is to give them y.</li>
<li>Don't charge customers who are in the beta.</li>
<li>Don't lead the witness.</li>
<li>Don't confuse customer requirements with product requirements.</li>
</ul>
<div>
I've personally seen the benefit of following these rules and have seen that NOT following these rules can lead to direct negative consequences. </div>
<div>
<br /></div>
<div>
Let's take a look at the closest thing I could find to a short thesis: a summary of how to better run a startup. It's basically a copy/paste of <a href="http://www.svpg.com/startup-product-management/">this blog post</a> into the book.</div>
</div>
<div>
<blockquote>
<i>Here’s how it typically works. Someone with an idea gets some seed funding, and the first thing he does is hire some engineers to start building something. The founder will have some definite ideas on what he wants, and he’ll typically act as product manager and often product designer, and the engineering team will then go from there. The company is typically operating in <b>“stealth mode” so there’s little customer interaction</b>. It takes <b>much longer for the engineering team to build something than originally thought</b> because the requirements and the design are being figured out on the fly.<br />After <b>6 months or so</b>, the engineers have things in sort of an alpha or beta state, and that’s when they first show the product around. Things rarely go well in this first viewing, and the team starts scrambling. <b>The run-rate is high because there’s now an engineering team building this thing as fast as they can</b>, so the money is running out, and the product isn’t there. Maybe the company gets more funding and a chance to get the product right, but often they don’t. Many startups try to get more time by outsourcing the engineering to a low-cost off-shore firm, but it’s essentially the same process with the same problems.<br />Here’s a very different approach to new product creation, one that costs dramatically less and is much more likely to yield the results you want. The founder hires a product manager, a product designer, and a prototyper. 
Sometimes the designer can also serve as prototyper, and sometimes the founder can serve as the product manager, but one way or another, you have these three functions lined up - product management, product design, and prototyping – and the team starts a process of very rapid product design and iteration.<br />I describe this process in detail in “How To Write a Good PRD,” but there are two keys: 1) the idea is to create a <b>high-fidelity prototype</b> that mimics the eventual user experience – it’s just fine if the back-end processing and data is all fake; and 2) you need to validate this product design with real target users.<br />In this model, it is normal to create literally dozens of versions of the prototype – it will evolve daily, sometimes with minor refinements and sometimes with very significant changes. But the point is that with each iteration you are getting closer to identifying a winning product. <b>This process typically takes between 3 weeks and 2 months</b>, but at the end of the process, you have a) identified a product that you have validated with the target market; b) a very rich prototype that serves as a living spec for the engineering team to build from; and c) you now understand at a much greater degree what you’re getting into and what you’ll need to do to succeed.<br />Now when you bring on an engineering team, they’ll start off with a tremendous advantage – <b>a clear understanding of the product they need to build and a stable spec</b> – and you will find that the team can produce a quality implementation much faster than they would otherwise.<br /><b>This model of prototype-based product experimentation is increasingly becoming the norm in the manufacturing world, but for some reason this hasn’t taken off in software. I </b>think we’re such an engineering-driven culture that we just naturally start there. 
But any startup has to realize that everything starts with the right product – so the first order of business is to figure out what that is before burning through $500K or more in seed funding.<br />I believe this model applies beyond startups to much larger companies as well. The difference is that bigger companies are generally able to underwrite the several iterations it takes to get to a useful product, but startups often can’t. But there’s no reason for the inefficiencies that larger companies regularly endure.</i></blockquote>
</div>
<div>
Allow me to nitpick this idea.</div>
<div>
<ol>
<li><b>Stealth mode is bad</b>. Cagan and I agree that stealth mode deprives the team of feedback, but another solution to this problem is to just not do stealth mode.</li>
<li><b>It takes longer to develop than originally thought</b>. This is a widespread problem related to an engineering team's poor estimation skills. Skills can be improved. A typical Agile engineering team is bad at this for the first few iterations of its existence but can quickly refine those estimates as it retrospects on those estimates after each iteration. My engineering team is typically off estimate by roughly 10% or less, and is quick to refine estimates as we discover new things. Sometimes we discover an open source tool that lets us re-use rather than rebuild and the estimate drops, and sometimes we learn from our users that we are building the wrong thing and must adjust accordingly. But, I argue that the total time to market is faster this way than prototyping without engineers first and then bringing in engineers - something I elaborate more on shortly.</li>
<li><b>Six months to the first beta</b>. Cagan and I agree that spending six months without any user feedback is bad. But, another solution is to just develop the most important thing first and start the user testing against the actual product rather than a prototype. My engineering team typically has a beta of the core functionality of a new product ready for user testing after two to four weeks. I'll get into why this is optimal shortly.</li>
<li><b>High Run Rate</b>. I suspect that the run-rate wouldn't be that much higher to prototype using real engineers building a real product rather than paying for prototypers to build something that will be thrown out when the engineers show up and tell you the prototype needs to be re-designed because it didn't take into account technical costs. In fact, I would bet that the total cost to get to market with Cagan's solution to this problem would be much higher than if a real engineer worked on the prototype because UX innovation would be taking place in parallel with engineering innovation rather than delaying the engineering innovation until after the UX design was considered "done" (something that's not totally possible until the engineering innovation has taken place anyway).</li>
<li><b>High Fidelity Prototype</b>. In the Agile world it would not be easy to create a high-fidelity prototype without consulting with the engineers about what is technically possible for the first version. At best, this prototype is a close approximation to the perfect world given unlimited engineering budget. Remember, given a complete waterfall-style specification, you're going to spend far more than 80% of your engineering effort on far less than 20% of the product and without engineers present for the prototyping phase, small changes to that 20% could have a dramatic effect on that 80% of the budget. I'm not arguing against the high-fidelity prototype. I'm arguing that the engineers should be building that prototype rapidly using test-driven development, and with very short (sub-weekly) iterations and frequent (sub-weekly) releases.</li>
<li><b>3 weeks to two months</b>. In the solution I propose, after those 3-8 weeks, you not only have yourself a high-fidelity prototype, but you have a few engineers who are intimately aware of the customer's needs, and a suite of tests that document the specification of the user-facing part of the application. Swapping out the fake backend for something that contains security and persistence is simply the next phase of the already-in-progress project.</li>
<li><b>Clear understanding and a stable spec.</b> If this were a waterfall-style shop, Cagan makes a good point. If you want the engineers to passively consume the specification and build a system that meets that specification, Cagan proposes a great solution to the outlined problem. But, if this system were being built by true software craftsmen, then they would look at the specification and begin to suggest small changes that could drastically reduce the cost of the system. Those proposals would need to be run through the same UX design process and suddenly that 3-8 weeks spent in pre-planning starts not to be so valuable. What's missing is that <b>the UX and Product team do not have a clear understanding of what makes sense to build technically</b> until the engineers receive and respond to that specification. And, when that starts to happen, that specification will no longer be stable.</li>
<li><b>Prototype-based experimentation hasn't taken off in software</b>. The fact that the author has no idea why the idea hasn't taken off in software strikes directly at the heart of what is wrong with the thesis of this book. The author has little to no understanding of the state of the art in software engineering processes. Lean Startup, Scrum, Kanban, and the host of other Agile philosophies have their roots in the manufacturing world and describe themselves as ways to create small "experiments" or continuous evolution of a prototype to create software. Software engineering is exactly the thing Cagan is proposing we do, but he's proposing we do it without the software engineers. I suspect this has something to do with the companies he has worked for: HP, AOL, Netscape, eBay - companies that aren't known for their ability to rapidly innovate through software engineering. Earlier in the book he advises not to make the actual product the prototype (copied from <a href="http://svpg.com/flavors-of-prototypes/">this blog post</a>) because the costs to build, test, and deploy are too high, but another solution to this problem is to crush those costs down so this is no longer a barrier - something that we have done fairly well at the company I am working for now.</li>
</ol>
<div>
I think #7 and #8 are the most important problems with Cagan's thesis. First, it's unfounded and inaccurate, and second, it's a hypothesis that the author himself doesn't appear to have tested (or at least he doesn't state he has any experience testing it). Finally, it appears to contradict advice he gave earlier in the book, taken from a different <a href="http://www.svpg.com/product-fail/">blog post</a>:</div>
</div>
<blockquote class="tr_bq">
<i>Maybe the biggest missed opportunity in [old-school product management], is the fact that engineering gets brought in way too late. We say if you’re just using your engineers to code, you’re only getting about half their value. The little secret in product is that engineers are typically the best single source of innovation, yet they are not even invited to the party in this process.</i></blockquote>
<div>
Which I wholeheartedly agree with, speaking as a self-promoting engineer.</div>
<div>
<br /></div>
<div>
Cagan measures the success of a product based on customer happiness, but he never shows how to measure happiness. In fact, he never shows how to measure anything. The word "data-driven" doesn't appear in the book once. Granted, a very important way to measure the success of a product is to simply ask customers if they are happy, but even Cagan concedes that simply asking customers isn't the best way to get requirements from them. Early on in Google's life, they tried to sell their search technology to Yahoo. Yahoo didn't want to pay for it because search was considered a solved problem and users already loved Yahoo. Google had measurably better technology, but no users loved Google, so Yahoo passed. Upon launching, Google didn't simply ask users if Google Search made them happy. It measured every time people navigated to the second page of search results and considered that click a "failure" of the search. Google set a goal to reduce the average number of "next" clicks per search every year. This allowed Google to innovate in a scientifically measurable way and stay ahead of the competition even though users already loved Yahoo. True innovation isn't simply putting a fresh coat of paint on a mediocre product or giving the user a horse that's easier to ride. True innovation happens when creative people solve an important problem using a solution everybody else thinks is impossible or dumb. The best innovation, rarely developed through UX, <a href="https://en.wikipedia.org/wiki/Disruptive_innovation">is disruptive</a>. The first automobile was considered slow, difficult to operate, and expensive to maintain compared to the horse.</div>
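The kind of metric Google used can be sketched in a few lines of Javascript. The event representation here is invented purely for illustration, not anything Google published:

```javascript
// Hypothetical sketch: treat each "next page" click as a failed search and
// compute the failure rate over a stream of events. The event format
// ({ type: 'search' } / { type: 'next_page' }) is invented for illustration.
const searchFailureRate = (events) => {
  const searches = events.filter((e) => e.type === 'search').length;
  const nextClicks = events.filter((e) => e.type === 'next_page').length;
  // Avoid dividing by zero when there are no searches yet.
  return searches === 0 ? 0 : nextClicks / searches;
};
```

A metric like this gives a team a number to drive down year over year, instead of a survey answer.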
<div>
<br /></div>
<div>
Cagan's approach is a great way to take a waterfall organization building mediocre products to another level. But, he isolates the creative process to the few people with the right job title and gives them the tools of subjective psychology-driven decision-making. The customer may end up loving your product, but it won't necessarily be the best product you could have built.</div>
David Ronhttp://www.blogger.com/profile/03498490798803568055noreply@blogger.com0tag:blogger.com,1999:blog-2643923222133320297.post-645255548895737702015-02-11T20:19:00.000-08:002015-02-11T20:36:24.460-08:00Functional Programming and Domain Driven DesignI was in a meeting full of senior engineers a few months ago, and we were discussing the state of our Javascript, which led to the topic of Functional Programming. I mentioned that I thought we were spending a lot of time writing packages of functions that were fairly composable, but we were having trouble reducing duplication because our code was organized by project, specific to the problem being solved at any given time, with no concept of problem domains. I suggested we apply the principles of Domain-Driven Design to raise the level of abstraction and reduce duplication.<br />
<br />
Unanimous laughter followed. Buried inside the laughter was the comment, "DDD? What is this, 2008?"<br />
<br />
I assumed that I was missing something and that the concepts of DDD were incompatible with FP because all of the core "things" described in DDD have the word "object" in their name. But, while DDD as described by <a href="http://www.amazon.com/Domain-Driven-Design-Tackling-Complexity-Software/dp/0321125215">Eric Evans</a> is implemented in an Object Oriented language in his book, object orientation isn't implied by the fundamental approach. <br />
<br />
Then, I stumbled on <a href="https://skillsmatter.com/skillscasts/4971-domain-driven-design-with-scott-wlaschin">this video</a> which hypothesizes that DDD is even better with FP. The speaker asserts that there isn't any reason Functional Programs can't also contain a "ubiquitous language", "bounded contexts", etc.<br />
<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgP1p3kuckn3fYOoE1BTcneBJyHrAImbwi8EuAWaXN0g41UjR3H92wNKNa7_hCaHpLDJjVi_dgfyAU7XQgtusY2d7JL51bBtsHbmARr2jD2onOPiCbTxRA4ZfmCqdQHqamKgjRVnmb8ghk/s1600/Screenshot+2015-02-11+at+7.24.09+PM+-+Edited.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgP1p3kuckn3fYOoE1BTcneBJyHrAImbwi8EuAWaXN0g41UjR3H92wNKNa7_hCaHpLDJjVi_dgfyAU7XQgtusY2d7JL51bBtsHbmARr2jD2onOPiCbTxRA4ZfmCqdQHqamKgjRVnmb8ghk/s1600/Screenshot+2015-02-11+at+7.24.09+PM+-+Edited.png" height="222" width="320" /></a></div>
<br />
<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgbvWRE1ICtx92JKv1quBAdp7qxbV2FY7uqZVc_F_37iogyrNZBhHkbNobFyLi8hoO-kFfD_NoeMvCyYt0Pf8WQgI_prWrqsJwKOpl9Nd_4YvTiHO44N-OWVj4RRAjTLkjeI7w9ZAmmLbQ/s1600/Screenshot+2015-02-11+at+7.13.21+PM.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgbvWRE1ICtx92JKv1quBAdp7qxbV2FY7uqZVc_F_37iogyrNZBhHkbNobFyLi8hoO-kFfD_NoeMvCyYt0Pf8WQgI_prWrqsJwKOpl9Nd_4YvTiHO44N-OWVj4RRAjTLkjeI7w9ZAmmLbQ/s1600/Screenshot+2015-02-11+at+7.13.21+PM.png" height="171" width="320" /></a></div>
<br />
He doesn't exactly prove the hypothesis. But I assert that DDD, by my definition, is compatible with FP. So, below is my attempt to map each DDD buzzword to something that is easy to implement in a Functional program.<br />
<br />
<b>Value Objects</b> - Immutable values are all Value Objects in FP.<br />
<b>Entities</b> - Giving a Value Object a specific ID makes it an Entity. Easy in FP.<br />
<b>Aggregate</b> - A collection. Functional programs are full of these.<br />
<b>Aggregate Root</b> - A function that maps over an Aggregate. I suppose we can use types to protect our Aggregates from external mutation, but this wouldn't be necessary if we kept all of our data immutable.<br />
<b>Domain Event</b> - Fans of Big Data will likely find the concept of <a href="http://en.wikipedia.org/wiki/MapReduce">mapping and reducing</a> over <a href="http://en.wikipedia.org/wiki/Raw_data">raw data</a>, a very Functional idiom.<br />
<b>Service</b> - Functions that operate on data could be considered services.<br />
<b>Repository</b> - In the process of isolating non-pure database access from pure functional code, programmers often use some sort of repository pattern to abstract the database access behind a composable API.<br />
<br />
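The mapping above can be sketched in plain Javascript. All of the domain names here (money, account, ledger, deposit) are invented for illustration:

```javascript
// A sketch of the DDD-to-FP mapping above, with invented domain names.

// Value Object: an immutable value, compared by content rather than identity.
const money = (amount, currency) => Object.freeze({ amount, currency });

// Entity: a Value Object plus a stable id.
const account = (id, balance) => Object.freeze({ id, balance });

// Aggregate: a plain collection of Entities.
// Aggregate Root / Service: pure functions that map over the Aggregate and
// return a new Aggregate instead of mutating the old one.
const deposit = (ledger, id, amount) =>
  ledger.map((a) =>
    a.id === id
      ? account(a.id, money(a.balance.amount + amount, a.balance.currency))
      : a
  );
```

Because every step returns new frozen values, the external-mutation problem that Aggregate Roots guard against in OO simply doesn't arise.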
What does this mean? It means that we OO expats shouldn't throw out everything we were doing to organize our domain. We should take the most important and useful OO tools with us as we assimilate into our new FP world and only deviate once we have proven a superior idiom is available to solve our problems.David Ronhttp://www.blogger.com/profile/03498490798803568055noreply@blogger.com7tag:blogger.com,1999:blog-2643923222133320297.post-89322439858237121492013-11-03T14:41:00.000-08:002013-11-03T14:41:18.059-08:00Amazon Kindle: Resetting "Last Page Read"<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhg7gFd0IfeN05LvroHGHLr4aKmhyphenhyphenLmnEuwFA6rdzxYKMTKYDe2IMW1ew0KSUrX7EO3cpnkYYNPFGMCTABldAMj1s12V4wm87csD3emIUU6IEniKQ6GL4QrNyxrpSzjlCg9Y7CKcQs60KU/s1600/kindle.jpeg" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" height="320" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhg7gFd0IfeN05LvroHGHLr4aKmhyphenhyphenLmnEuwFA6rdzxYKMTKYDe2IMW1ew0KSUrX7EO3cpnkYYNPFGMCTABldAMj1s12V4wm87csD3emIUU6IEniKQ6GL4QrNyxrpSzjlCg9Y7CKcQs60KU/s320/kindle.jpeg" width="320" /></a></div>
<br />
Somehow, a book that I had just started reading on my Kindle jumped to the end. From then on, all of my Kindle devices showed that the "farthest point read" was that point, and Whispersync was effectively broken for that book. Here's how I was able to reset the "last page read" for that book:<br />
<ol>
<li>Go to the device where you have the book at the location where you want it to be set (either at the beginning of the book or at a given page).</li>
<li>On http://www.Amazon.com click My Account > Digital Content > Manage Your Kindle. </li>
<li>Click "Whispersync Device Synchronization". </li>
<li>Turn Synchronization Off</li>
<li>Go back to your Kindle device and "Sync to furthest page read".</li>
<li>Remove the book from that device.</li>
<li>Re-download the book from your "cloud". On the Kindle, go home and tap "cloud" to find the book.</li>
<li>Open up the book.</li>
<li>Go back to "Whispersync Device Synchronization" and turn it back on</li>
</ol>
<br />
<span style="font-size: xx-small;"><a href="http://www.amazon.com/forum/kindle?_encoding=UTF8&cdForum=Fx1D7SY3BVSESG&cdThread=TxN2ZHBVF36G1K">Source</a>. </span>David Ronhttp://www.blogger.com/profile/03498490798803568055noreply@blogger.com0tag:blogger.com,1999:blog-2643923222133320297.post-39626739178274058912013-10-29T14:15:00.002-07:002013-10-29T14:15:11.745-07:00Google Is Slowly Closing AndroidArs Technica recently <a href="http://arstechnica.com/gadgets/2013/10/googles-iron-grip-on-android-controlling-open-source-by-any-means-necessary/">posted an article</a>
that outlines the various ways Google is closing Android. I think the majority of
the article is spot-on, and for the first time, I am seriously
disappointed in the Android Open Source Project leadership for allowing
this to happen.<br />
<br />
If this policy of leaving the OSS
project to stagnate continues, I will likely investigate an alternative
to Android as my primary operating system. My hope is that projects
like Cyanogenmod can <a href="http://www.cyanogenmod.org/blog/big-android-bbq-2013">take up the leadership role</a>, or at least threaten Google, such that Android doesn't become completely closed source.David Ronhttp://www.blogger.com/profile/03498490798803568055noreply@blogger.com0tag:blogger.com,1999:blog-2643923222133320297.post-16028612662600743922013-10-11T21:24:00.003-07:002014-02-01T21:21:53.223-08:00Fixing Lag On The Barnes And Noble Nook HD+ Running Cyanogenmod 10.2<div class="separator" style="clear: both; text-align: center;">
</div>
<div class="separator" style="clear: both; text-align: center;">
</div>
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEishsiRaiqdSDk5Tabk0lUv9UK6YHbI36DCfDkpecCb130UyAx-HmT_AJlGSCMY3zjaiJXTWLaaI2cZyiLvYMOaRqjjJ6eaNuxYOZ46mIL4pW60RzQCsFk6hNANbUn0FKKg0CV6PCnqjd4/s1600/maxresdefault.jpg" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEishsiRaiqdSDk5Tabk0lUv9UK6YHbI36DCfDkpecCb130UyAx-HmT_AJlGSCMY3zjaiJXTWLaaI2cZyiLvYMOaRqjjJ6eaNuxYOZ46mIL4pW60RzQCsFk6hNANbUn0FKKg0CV6PCnqjd4/s320/maxresdefault.jpg" height="180" width="320" /></a></div>
<br />
<h1>UPDATE: CyanogenMod 10.2.1 has trim enabled. Don't follow these directions. Just install 10.2.1.</h1><br /><br />
<br />
<br />
<br />
<br />
<br />
<br />
<br />
<br />
<br />
<br />
I have a Nook HD+, and the only way to love this tablet is to <a href="http://wiki.cyanogenmod.org/w/Install_CM_for_ovation">put CyanogenMod on it</a>, because that turns it into a very-near-stock Android tablet - no special Barnes & Noble stuff - and it gets really, really fast.<br />
<br />
This tablet has one huge flaw. The memory controller has a <a href="http://wiki.cyanogenmod.org/w/EMMC_Bugs#MAG2GA_TRIM_bug">tendency to fail</a> when the system runs a trim operation, something all modern Android devices do to keep themselves from slowly lagging more and more over time. <a href="http://pocketnow.com/2013/08/06/android-trim-support">Here is a decent description</a> of what trim is and why you want to have it on this device. The solution until now, even on the stock HD+ from what I can tell, is to just disable trim completely. So, what happens is that, regardless of whether you are running the stock firmware from Barnes & Noble or have upgraded this tablet to CyanogenMod 10.2, this tablet will progressively get slower and slower until it is nearly unusable.<br />
<br />
But, there is a solution, if you are willing to take a risk. Apparently, the Nexus 7 has the same memory controller as the Nook HD+, and Google has <a href="https://github.com/CyanogenMod/android_kernel_asus_grouper/commit/3de09ec8e73b7352c37f50a40d696f20be454b8b">patched Android to fix the bug</a>. There hasn't yet been enough confirmation that this is fixed, so it isn't in any mainstream kernels that I can find. But, if you are running CyanogenMod 10.2, you can flash <a href="http://forum.xda-developers.com/showpost.php?p=44947144&postcount=353">this kernel</a> right over your CyanogenMod 10.2 nightly and trim will be enabled. Once it's enabled, you can run <a href="https://play.google.com/store/apps/details?id=com.grilledmonkey.lagfix&hl=en">this app</a> to fix the lag. After running the app once, it'll probably be a while before it needs to be run again, so in case there IS a bug, just re-flash the CyanogenMod nightly to put the original kernel back.<br />
<br />
<br />
So, to recap the steps to make the Nook HD+ not lag when running CyanogenMod 10.2:<br />
<ol>
<li>Download <a href="http://forum.xda-developers.com/showpost.php?p=44947144&postcount=353">the patched kernel zip</a>.</li>
<li>Install the <a href="https://play.google.com/store/apps/details?id=com.grilledmonkey.lagfix&hl=en">lagfix app</a>.</li>
<li>Use CyanogenMod updater to get the latest nightly ready to go. </li>
<li>Reboot into ClockworkMod. </li>
<li>Flash the new kernel and reboot.</li>
<li>Run the lagfix app for all partitions.</li>
<li>Reboot into ClockworkMod again.</li>
<li>Flash the latest nightly zip (which will swap in a kernel without the fix for protection).</li>
<li>Enjoy Fast Tablet. </li>
</ol>
<br />
<br />David Ronhttp://www.blogger.com/profile/03498490798803568055noreply@blogger.com20tag:blogger.com,1999:blog-2643923222133320297.post-31488231519219581412012-11-07T20:21:00.000-08:002012-12-01T09:27:27.291-08:00An Anecdotal Review of Monoprice's In-Ear HeadphonesI like good sound, but I don't like to pay a ton of money for it. So, I look for headphones that are inexpensive but that I can make sound good. "Good" to me means that I can hear all of the instruments in a fairly complex piece of music and that they have enough frequency response to give decent highs and lows (even if it takes some equalization). As a drummer, highs and lows are most important to me, especially a nice dull thump from the bass drum without disturbing the rest of the instruments.<br />
<br />
In other words, I prefer the drivers to not be overwhelmed by sudden peaks of energy. For me, this is most obvious in the presence of low-frequency sounds just after the bass drum is hit. If, for instance, during a bass drum kick the bass guitar drops out suddenly and just as suddenly returns or, in softer music, the normal echo of the inside of the bass drum after a kick is not present, it's an indication that the driver has been overwhelmed. Doing a test like this with your favorite music is a great way to evaluate a subwoofer for a theater system, by the way.<br />
<br />
Finally, I am annoyed by added compression that brings the hard-to-reproduce frequencies into an easier-to-reproduce range, because it makes it harder to tell the difference between a hi-hat roll and the strum of an acoustic guitar (Bose). This probably won't be a problem in headphones under $20.<br />
<br />
I'm not an audiophile. I don't have a golden ear. I don't have fancy equipment to objectively assess my findings. The only expertise I have is that I am a musician using music I am extremely familiar with. So, obviously, this is a completely subjective analysis.<br />
<br />
<table align="center" cellpadding="0" cellspacing="0" class="tr-caption-container" style="margin-left: auto; margin-right: auto; text-align: center;"><tbody>
<tr><td style="text-align: center;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgFkMaUqBW-muclYYKp9-XkIhRf_d3tEO0OChenj8eRUCJoGyO7xDDvL5RRpK_Zsh_vQLYEt-a2fqMhm7zRQ6_6fV7Fg6pjPSYBbcaLqBxbIv6PekK9OviNOZs0qxp7z_jYrsQg2LUOv2k/s1600/Monoprice8320.jpeg" imageanchor="1" style="margin-left: auto; margin-right: auto;"><img border="0" height="240" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgFkMaUqBW-muclYYKp9-XkIhRf_d3tEO0OChenj8eRUCJoGyO7xDDvL5RRpK_Zsh_vQLYEt-a2fqMhm7zRQ6_6fV7Fg6pjPSYBbcaLqBxbIv6PekK9OviNOZs0qxp7z_jYrsQg2LUOv2k/s320/Monoprice8320.jpeg" width="320" /></a></td></tr>
<tr><td class="tr-caption" style="text-align: center;">Monoprice 8320 ($8)</td></tr>
</tbody></table>
All that being said, in the "sub $20" range, the Monoprice 8320 headphones are <a href="http://www.head-fi.org/products/monoprice-8320-iem/reviews">well</a> <a href="http://lifehacker.com/5927570/the-monoprice-8320s-are-the-best-earbuds-youll-find-under-10">known</a> for being "audiophile quality" to budget consumers like me. I purchased them, and I agree. They sound great. But, the design hurts my outer ears - and, yes, I am wearing them correctly by wrapping them around the back of my ear. I had the same problem with my old pair of Koss Cans (I think they are called "Pathfinder In-Ear Headphones" now). The Koss headphones aren't cheap enough for this review, and they aren't nearly as good sounding as my favorites here anyway.<br />
<br />
So, I began my search for something that sounds just as good for the same price. First up, I tried the Panasonic RP-HJE450 phones. I searched Amazon for phones that got great reviews under $20, and these popped up. <br />
<table align="center" cellpadding="0" cellspacing="0" class="tr-caption-container" style="margin-left: auto; margin-right: auto; text-align: center;"><tbody>
<tr><td style="text-align: center;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEipdem3J7VnR_0yE_KkmPq54J2fqR8RoM63xJj-4fgHmNDKLJlCej7M8dNscVUb43U8kxvC34p6g-T4G_OhY7S4CARuAm4dyRd6YFy2lwjpzuJWApdBBOCYnyIKKe1lT7hUFSaXiqDVIvM/s1600/panasonic.jpg" imageanchor="1" style="margin-left: auto; margin-right: auto;"><img border="0" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEipdem3J7VnR_0yE_KkmPq54J2fqR8RoM63xJj-4fgHmNDKLJlCej7M8dNscVUb43U8kxvC34p6g-T4G_OhY7S4CARuAm4dyRd6YFy2lwjpzuJWApdBBOCYnyIKKe1lT7hUFSaXiqDVIvM/s1600/panasonic.jpg" /></a></td></tr>
<tr><td class="tr-caption" style="text-align: center;">Panasonic RP-HJE450 $20</td></tr>
</tbody></table>
<br />
These Panasonics fit really well, but that's about all they have going for them. They have great frequency response, and the drums are incredibly clear, but there are NO mids. It's hard to make out the vocals, and the very faint echo you can hear from the room the music was recorded in is totally lost, which makes the audio sound artificial. Equalizing up the mids really helps, so I'll keep them. Explosions in movies make my eyeballs shake. So, there's that. Also, the cable's split extends more than halfway up, so I found it catching on everything.<br />
<br />
After that experience, I decided to go back to Monoprice and try out their other models. First up, the Monoprice 8321s.<br />
<table align="center" cellpadding="0" cellspacing="0" class="tr-caption-container" style="margin-left: auto; margin-right: auto; text-align: center;"><tbody>
<tr><td style="text-align: center;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgPJ-ieOlwOdv4KQi-1MrhNjsNZc9LQ0Aa9on85EUBWcetyFf6_dBAzw9_uxrclktbM94IIOj2RBOZvWa8XkNieCr4Wh8jYveZo7ogmQWBlXl6ueyji0RkI8gHUEm2AyoeXwFARlMDeI3A/s1600/8321.jpg" imageanchor="1" style="margin-left: auto; margin-right: auto;"><img border="0" height="240" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgPJ-ieOlwOdv4KQi-1MrhNjsNZc9LQ0Aa9on85EUBWcetyFf6_dBAzw9_uxrclktbM94IIOj2RBOZvWa8XkNieCr4Wh8jYveZo7ogmQWBlXl6ueyji0RkI8gHUEm2AyoeXwFARlMDeI3A/s320/8321.jpg" width="320" /></a></td></tr>
<tr><td class="tr-caption" style="text-align: center;">Monoprice 8321 $5</td></tr>
</tbody></table>
The difference between the 8320s and 8321s is more than one integer. The specs on Monoprice's site are totally different, and it's obvious. Regardless of how much you pump into these things, you will NOT be able to hear the bass drum resonate or anything that rings in the high end. These might as well be the crappy headphones that came with your smartphone - useless for anything other than talk radio.<br />
<br />
<br />
<table align="center" cellpadding="0" cellspacing="0" class="tr-caption-container" style="margin-left: auto; margin-right: auto; text-align: center;"><tbody>
<tr><td style="text-align: center;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgAAFtXTFLietsa8dWujDcMbJhY__XXyXMri5EKZoCOXj2QfSdj-qylF8r6yq9CbC8G4QL1kTLW4oCb-DChT7WiqdvoGViUygx4OzKL8L_j1cAnZaRbznz4UEzOdjY2tofCENXq5cniAyg/s1600/9398.jpg" imageanchor="1" style="margin-left: auto; margin-right: auto;"><img border="0" height="240" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgAAFtXTFLietsa8dWujDcMbJhY__XXyXMri5EKZoCOXj2QfSdj-qylF8r6yq9CbC8G4QL1kTLW4oCb-DChT7WiqdvoGViUygx4OzKL8L_j1cAnZaRbznz4UEzOdjY2tofCENXq5cniAyg/s320/9398.jpg" width="320" /></a></td></tr>
<tr><td class="tr-caption" style="text-align: center;">Monoprice 9398 $12</td></tr>
</tbody></table>
<table align="center" cellpadding="0" cellspacing="0" class="tr-caption-container" style="margin-left: auto; margin-right: auto; text-align: center;"><tbody>
<tr><td style="text-align: center;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjDFEyHRsQxVQ_iFsLAuPKs58IVroKteerNg6EpG49cMwsfh_7-oq3rOJBxDFw20_Adgoh4f4QJ-mpqx6WRL2tkDgO4MLtPEW3yNrEko54M4mM0VJSAV6VkGN3iCN4YNeV10WTKaDQRkIM/s1600/9397.jpg" imageanchor="1" style="margin-left: auto; margin-right: auto;"><img border="0" height="240" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjDFEyHRsQxVQ_iFsLAuPKs58IVroKteerNg6EpG49cMwsfh_7-oq3rOJBxDFw20_Adgoh4f4QJ-mpqx6WRL2tkDgO4MLtPEW3yNrEko54M4mM0VJSAV6VkGN3iCN4YNeV10WTKaDQRkIM/s320/9397.jpg" width="320" /></a></td></tr>
<tr><td class="tr-caption" style="text-align: center;">Monoprice 9397 $13</td></tr>
</tbody></table>
<table align="center" cellpadding="0" cellspacing="0" class="tr-caption-container" style="margin-left: auto; margin-right: auto; text-align: center;"><tbody>
<tr><td style="text-align: center;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgPWQiptmPqUjVZJgAA2sNTKqoO0rwCSY6MTRWP8fW6uDwfmpM1v9ZXSw63n17FP8aknMTy3MsyzxoUOgXsMqKhB0stx1fk3FK4JhxuaSrfRnyrL0zZg99kW-HBgXb30JxFqLIB1RtZgbA/s1600/9396.jpg" imageanchor="1" style="margin-left: auto; margin-right: auto;"><img border="0" height="240" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgPWQiptmPqUjVZJgAA2sNTKqoO0rwCSY6MTRWP8fW6uDwfmpM1v9ZXSw63n17FP8aknMTy3MsyzxoUOgXsMqKhB0stx1fk3FK4JhxuaSrfRnyrL0zZg99kW-HBgXb30JxFqLIB1RtZgbA/s320/9396.jpg" width="320" /></a></td></tr>
<tr><td class="tr-caption" style="text-align: center;">Monoprice 9396 $7</td></tr>
</tbody></table>
<br />
Finally, there are these three: the 9396, 9397, and 9398. The 98, despite having the largest product number, is NOT the best sounding. It has really good metal construction, but it's REALLY bass-heavy and muddy. Granted, these are intended for "video gaming," so I guess gamers want to lose their low-frequency hearing. With equalization, these sound fairly good for $12, but in comparison to all of the others in this review, and without equalization, they are nearly as bad as the horrible 8321s. If you don't turn the bass down on these units, you will find the rest of the audio washed out, because there is only one driver, and all of the movement of the speaker is dedicated to billowing whale sounds into your brain. They can probably be made to sound as good as the Panasonics with less equalization than the Panasonics need.<br />
<br />
The 97s have a slightly larger driver than the 98s and are a little bass-heavy. They are excellent for under $15! Way better than the Panasonics, or the 98s.<br />
<br />
The 9396s sound even better! Without any equalization, they sound almost as good as (maybe better than) the 8320s! And, they are more comfortable than the 8320s. The only real complaints I have are that they are cheaply constructed and it's hard to tell which earpiece is left and which is right. I painted some white-out on the back of the right unit: problem solved. I think I'll probably purchase a few of these 96s.<br />
<br />
So, how do they stack up? The 8320s and 9396s are the best sounding, but the 9396s are my favorite due to comfort. The 8320s are my second favorite for sound, and the 9397s are my second favorite overall.<br />
<br />
<b>Ranking</b>:<br />
<ol>
<li>Monoprice 9396 - Possibly the best sounding. The most comfortable from Monoprice. Cheap construction (buy a few).</li>
<li>Monoprice 9397 - Takes a little equalization to make the mids available.</li>
<li>Monoprice 8320 - Probably the best sounding, but they hurt my outer ear.</li>
<li>Panasonic RP-HJE450 - without equalization, the mids don't exist. With equalization, there is enough response across the board to get really good sound. The most comfortable overall.</li>
<li>Monoprice 9398 - I could hear the music somewhere behind the bomb going off inside my head. After equalization, amazing for $12.</li>
<li>Monoprice 8321 - Seriously, you're paying more for shipping. Go to the drug store and get some buds - they'll sound just as awful.</li>
</ol>
David Ronhttp://www.blogger.com/profile/03498490798803568055noreply@blogger.com11tag:blogger.com,1999:blog-2643923222133320297.post-57168489299908464462012-08-21T17:43:00.000-07:002016-11-11T09:19:57.464-08:00JQuery: Keeping the UI Responsive During Slow Page Loads With The Event QueueLet's say we have some really slow operation that works like this:<br />
<br />
<div class="code">
$("#container").empty();
$(giantArray).each(function(i, element){
verySlowRenderOperation(element).appendTo($("#container"));
});
</div>
<br />
It's possible that this will make the page lock up while the above routine executes. Most web browsers will even become unresponsive while this happens. The most popular way to solve this problem is to use <a href="https://developer.mozilla.org/en-US/docs/DOM/window.setTimeout">setTimeout</a>. The problem with setTimeout is that the deferred callbacks might run out of order, and the code above needs its items rendered in order. Here's a fairly elegant solution that uses jQuery's <a href="http://api.jquery.com/queue/">built-in function queue</a> to process a queue of arbitrary functions in order. We can create a 1ms pause between each call, which frees the page up to process other events between steps. (Browser JavaScript is single-threaded, so this doesn't move the work to a separate thread; it just yields to the event loop between items.)<br />
<br />
<div class="code">
$("#container").clearQueue(); //Prevent race conditions if a previous run is still pending.
$("#container").empty();
$(giantArray).each(function(i, element){
$("#container").delay(1).queue(function(){
verySlowRenderOperation(element).appendTo($("#container"));
$(this).dequeue();
});
});
</div>
<br />
More information can be found in jQuery's documentation on <a href="http://api.jquery.com/queue/">queue()</a> and <a href="http://api.jquery.com/delay/">delay()</a>. <br />
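For readers curious why the queue preserves order where bare setTimeout calls might not, the same pattern can be sketched in plain JavaScript with no jQuery at all (the <code>createQueue</code> helper below is hypothetical, purely for illustration): each queued function receives a <code>next</code> callback, analogous to <code>$(this).dequeue()</code>, and nothing later in the queue runs until the previous step explicitly hands off.

```javascript
// Minimal sketch of a jQuery-style function queue (hypothetical helper,
// not part of jQuery). Each queued function gets a `next` callback and
// must call it to let the following function run -- this is what
// guarantees in-order execution.
function createQueue() {
  var fns = [];
  var running = false;
  function next() {
    var fn = fns.shift();
    if (fn) {
      fn(next);        // run the next step, handing it the dequeue callback
    } else {
      running = false; // queue drained
    }
  }
  return {
    queue: function (fn) {
      fns.push(fn);
      if (!running) {  // start draining on the first enqueue
        running = true;
        next();
      }
    }
  };
}

// Items are processed strictly in order, even though each step only
// proceeds when the previous one calls next() -- in the real page you
// would wrap the body in setTimeout to yield between items.
var order = [];
var q = createQueue();
[1, 2, 3].forEach(function (i) {
  q.queue(function (next) {
    order.push(i);
    next(); // like $(this).dequeue()
  });
});
console.log(order.join(",")); // "1,2,3"
```

In jQuery's version, the 1ms <code>delay()</code> plays the role of a setTimeout between <code>next()</code> hand-offs, which is what keeps the browser responsive.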
<br />David Ronhttp://www.blogger.com/profile/03498490798803568055noreply@blogger.com0tag:blogger.com,1999:blog-2643923222133320297.post-27695636282436084442012-07-23T13:26:00.000-07:002012-07-23T13:26:51.833-07:00HTML is Diverging. This is a good thing.I'm seeing this news piece floating about the web today about <a href="http://www.webmonkey.com/2012/07/html-groups-part-ways/">HTML5 diverging into two separate standards</a>. The story was on Slashdot a few days ago and my <a href="http://slashdot.org/comments.pl?sid=2995609&cid=40726589">comment was modded +5</a>, so I figure it's worth reposting here for others to see:<br />
<br />
<br />
This is similar to a source tree having a "development branch" and a "stable branch". WHATWG will be responsible for evolving the fast-paced development branch of HTML, while the W3C will take occasional snapshots and stabilize the features of the development branch into "full standards". I assume that most of the complaints here are related either to bad marketing - WHATWG should just start calling their version HTML6 or "future HTML" or something - or to the fact that these bodies (especially the W3C) move slowly and we are in the middle of a new stable branch getting pulled.<br />
<br />
By the way, HTML5 isn't, according to the W3C, a standard yet. The current HTML standard is 4.01. HTML5 is planned to become a "full standard" in 2014. In that time, WHATWG will introduce dozens of new major features into what will probably be called either HTML6 or HTML5.1 when the W3C gets around to pulling another snapshot.<br />
<a href="http://en.wikipedia.org/wiki/HTML#Version_history_of_the_standard" title="wikipedia.org">http://en.wikipedia.org/wiki/HTML#Version_history_of_the_standard</a>David Ronhttp://www.blogger.com/profile/03498490798803568055noreply@blogger.com0tag:blogger.com,1999:blog-2643923222133320297.post-52743323548146635882012-06-12T09:29:00.000-07:002012-06-12T09:30:40.564-07:00Ubuntu Amazing Sound With Mediocre HeadphonesI have a pair of Koss UR40 Titanium headphones. I like them because they are fairly portable, super lightweight, open-air, and completely cover my ears, which means I can wear them an entire day without them hurting or getting too warm. They cost about $40 and have a lifetime warranty. And, they have a frequency response rating of 15 Hz-20,000 Hz. Since they are cheap, the mid-range frequencies are much more pronounced, which means that the very high and very low frequencies (guitar plucks, cymbals, bass guitar, and the bass drum) are lost under a flat equalizer (what your computer normally puts out). But since they have that nice frequency response, we can make them sound acceptable with what's called <a href="http://en.wikipedia.org/wiki/Smiley_face_curve">the smiley face curve</a> equalizer setting. This will be the case with most headphones that cost under $150. So, here's how I made the problem a little better:<br />
<br />
<div style="font-family: "Courier New",Courier,monospace;">
$ sudo add-apt-repository ppa:nilarimogard/webupd8</div>
<div style="font-family: "Courier New",Courier,monospace;">
$ sudo apt-get update</div>
<div style="font-family: "Courier New",Courier,monospace;">
$ sudo apt-get install pulseaudio-equalizer</div>
<div style="font-family: "Courier New",Courier,monospace;">
<br /></div>
<br />
<span style="font-family: inherit;">Now, run </span><span style="font-family: "Courier New",Courier,monospace;">pulseaudio-equalizer</span> and set it up like this. Make sure that nothing goes too far over 1.0, because it will cause saturation noise (it'll sound crunchy):<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEh3pUkzNfYWn9St__TbWM_eRmZpJbyXFtByqSatJz-Nv9cwpqcr4qTagYRNW-xDI5gq2cCZ04kgEs25mAXz5SJMsIPXIU7x3s4EniaedrA4jBTMMzXy8BljEKd0jHX-WJrs8Hb432JoKmw/s1600/Equalizer.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" height="142" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEh3pUkzNfYWn9St__TbWM_eRmZpJbyXFtByqSatJz-Nv9cwpqcr4qTagYRNW-xDI5gq2cCZ04kgEs25mAXz5SJMsIPXIU7x3s4EniaedrA4jBTMMzXy8BljEKd0jHX-WJrs8Hb432JoKmw/s400/Equalizer.png" width="400" /></a></div>
<br />
If you want to see what a difference it makes, play your favorite song and check/uncheck "EQ Enabled" to compare what it originally sounded like with what it sounds like now. You'll notice a HUGE difference.David Ronhttp://www.blogger.com/profile/03498490798803568055noreply@blogger.com1tag:blogger.com,1999:blog-2643923222133320297.post-50877944439647703872012-06-07T11:34:00.003-07:002016-11-11T09:20:05.369-08:00A Simple URL Redirection Service In JavascriptBit.ly, TinyUrl, goo.gl... You've probably had an opportunity to use a URL shortening service in the past, but maybe you want one of your own that allows you to create custom shortened URLs with coherent names rather than "http://bit.ly/9SDFH43". Here's how I did it with a small client-side script I added to the index.html for my site.
<br />
Just put the following code inside your home page:<br />
<div class="code">
<script language="javascript">
var query = window.location.href.split("?")[1]; //undefined when there is no "?"
var key = query ? query.replace("/","") : "";
var urls={
'delicious':'http://www.delicious.com/davidron',
'ssh':"http://sdf.org/ssh",
'blog':"http://blog.davidron.com"
}
if(key){
if(urls[key]){
window.location.href=urls[key]
}else{
document.write("'"+key+"' not found :(");
}
}
</script>
</div>
<br />
Now, all you have to do is edit that "urls" block to add more redirected URLs. Just make sure that every entry except the last one ends with a comma, and that the last one doesn't (a trailing comma breaks older versions of IE).<br />
<br />
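The lookup logic can also be factored into a standalone function (the name <code>resolveShortUrl</code> is hypothetical, not part of the page script) so it can be exercised outside a browser; it returns the redirect target for a key, or null when the key is missing or unknown, and it tolerates URLs with no "?" at all:

```javascript
// Hypothetical standalone version of the redirect lookup, for testing
// outside a browser. Mirrors the in-page script: take everything after
// the "?", strip a stray "/", and look it up in the urls table.
function resolveShortUrl(href, urls) {
  var query = href.split("?")[1];            // undefined when there is no "?"
  var key = query ? query.replace("/", "") : "";
  return (key && urls[key]) ? urls[key] : null;
}

var urls = {
  delicious: "http://www.delicious.com/davidron",
  ssh: "http://sdf.org/ssh",
  blog: "http://blog.davidron.com"
};

console.log(resolveShortUrl("http://blog.davidron.com/?ssh", urls)); // "http://sdf.org/ssh"
console.log(resolveShortUrl("http://blog.davidron.com/", urls));     // null (no key given)
```

In the real page, a non-null result feeds <code>window.location.href</code> and a null result falls through to the "not found" message.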
I embedded that code directly in this blog. You can surf to <a href="http://blog.davidron.com/?ssh">http://blog.davidron.com?ssh</a> and the browser redirects! I redirected http://davidron.com/ to http://blog.davidron.com? so that you can also surf to <a href="http://davidron.com/ssh">http://davidron.com/ssh</a> to get to the same redirect.David Ronhttp://www.blogger.com/profile/03498490798803568055noreply@blogger.com0tag:blogger.com,1999:blog-2643923222133320297.post-64035914703723657152012-05-31T11:51:00.002-07:002012-05-31T11:52:05.136-07:00Fix The Verizon FiOs Westell 9100em Arp Cache BugIt seems like the Verizon FiOs Westell 9100em has a very small ARP table that ages out IP-address-to-MAC-address mappings before the leases on those IP addresses have expired. The symptom is that, over time, you can't ping some devices on your wireless network: you get "host not found" or "destination host unreachable". One way I was able to solve this without introducing additional complexity (another wireless router) was to simply increase the frequency with which all machines must renew their IP addresses. The default is 24 hours, so I reduced it to 6 hours. Here's how:<br />
<ul>
<li>Log into your Westell 9100em (<a href="http://192.168.1.1/" shape="rect" title="http://192.168.1.1">http://192.168.1.1</a>)</li>
<li>Click advanced->IP Address Distribution</li>
<li>Click the "edit" button next to "Network (Home/Office)"</li>
<li>Change the lease time (TTL) from 1440 minutes (24 hours) to 360 minutes (6 hours)</li>
<li>Apply</li>
</ul>David Ronhttp://www.blogger.com/profile/03498490798803568055noreply@blogger.com0tag:blogger.com,1999:blog-2643923222133320297.post-30826018076641956322012-04-20T09:42:00.001-07:002012-04-20T09:42:35.363-07:00Running Bookmarklets from the Firefox URL BarI have certain bookmarklets bookmarked with "keywords" in Firefox so that when I am on a page I can, for example, just navigate to the URL "offline", which executes the <a href="http://getpocket.com/">Pocket</a> bookmarklet. Mozilla has recently <a href="https://bugzilla.mozilla.org/show_bug.cgi?id=680302">disabled this functionality</a>, and there is now an add-on that adds it back. <a href="https://addons.mozilla.org/en-US/firefox/addon/inheritprincipal/">This add-on</a> allows users to run bookmarklets from the URL bar in Firefox again.<br> David Ronhttp://www.blogger.com/profile/03498490798803568055noreply@blogger.com0