Received an EyeClops BioniCam for Christmas.
Device records magnified movies and images to a removable USB stick in FAT16 format, which makes it Linux (and Mac OS X) compatible.
Manufacturer warns that “the Eyeclops is a handheld bionic microscope meant to be used as a fun toy – not as an educational aid.”
I’m no expert, but a tech gadget, particularly an electronic microscope, which is easy to use, magnifies whatever you point it at to 400x, and captures live video or photographs… that seems fun, and educational, to me.
On July 27th, we carried out a thorough scan of 802.11 networks in downtown Denver. A Columbus V-900 tracking device was used to log location and path, validating that all streets and alleys were traversed. Meanwhile, an iPhone 3G running WiFiFoFum was used to detect and log networks, their locations, and attributes.
The area bounded by Speer Blvd to the southwest, Broadway to the east, and 20th Avenue to the northeast was covered. The total area traversed was approximately 1.4 square miles and 125 square blocks.
9,522 networks were detected in ~200 minutes.
- WPA2: 2418 (25.3%)
- WPA: 1843 (19.4%)
- WEP: 3294 (34.6%)
- None: 1967 (20.7%)
11.8% of the networks observed were hiding their ESSID, 88.2% were broadcasting.
On average, a new WiFi network was discovered every 1.3s during the scan.
Number of wireless networks per…
- square mile: 6,800
- city block: 75
- acre: 11
Total population of the scan area is not known. A portion of the area, though, known as the Golden Triangle, has a population of 630 residents. In that neighborhood, 1506 networks were detected, for a total of 2.4 access points per person.
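The density figures above follow directly from the raw totals. A quick Python sketch (the only added assumption is the standard conversion of 640 acres per square mile):

```python
# Reproduce the density figures from the scan totals above.
networks = 9522           # networks detected downtown
area_sq_mi = 1.4          # approximate area traversed
blocks = 125              # square blocks covered
acres = area_sq_mi * 640  # 640 acres per square mile

print(round(networks / area_sq_mi))  # per square mile: ~6,800
print(round(networks / blocks))      # per city block:  ~75
print(round(networks / acres))       # per acre:        ~11

# Golden Triangle: 1,506 networks among 630 residents
print(round(1506 / 630, 1))          # access points per person: 2.4
```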
- Downtown, pass 0 KML, Golden Triangle
- Downtown, pass 1 KML, 17th to 20th
- Downtown, pass 2 KML, Champa to Court
- Downtown, pass 3 KML, Arapahoe to Wynkoop
- Downtown, pass 4 KML, Commons Parks, Speer, Colfax
Since Pete Shipley first pioneered wardriving in the San Francisco Bay Area, many people have cataloged the locations of 802.11 networks around the world.
I still remember the excitement, in the early days, driving the streets of downtown with a makeshift antenna, Orinoco ‘Gold’ card, and the soft glow of a Thinkpad 600x illuminating the passenger seat. You could often drive several miles before the faint signal of a distant access point would flicker across the screen.
Much has changed since then, as the number and density of networks have exploded.
Also, small handheld devices like the Nokia N810 and Apple iPhone can now scan for networks, track location via GPS, and log the results, all in a compact form factor.
Still, it seems that few, if any, published network surveys offer truly comprehensive details on all detectable networks within a given area.
A couple of exceptions: a survey of metro Seattle, performed by 100 undergraduate students who detected 5,225 networks in 2004, and the annual RSA/EMC wireless security survey of New York, London and Paris, which attempts to log, and provide some analysis of, detectable wireless networks within those three cities.
Most other surveys continue to focus their efforts on main arteries and thoroughfares where large numbers of networks can be detected in short amounts of time.
In the following survey, an attempt is made to detect all available 802.11 wireless networks within the target neighborhood by traversing all publicly accessible streets, alleys and side-roads.
A constant and slow travel velocity is maintained to ensure that, given the antenna’s sensitivity and the detector’s scan rate, no available networks go undetected.
Multiple passes are made through the neighborhood to verify consistent detection.
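The velocity constraint can be sketched roughly. If the detector completes a scan every T seconds and a network is only detectable within some radius of its access point, the vehicle must travel less than that coverage circle's diameter per scan cycle to guarantee at least one scan lands inside every circle. The scan interval and detection radius below are hypothetical round numbers, not measurements from this survey:

```python
# Hypothetical figures, not measurements from this survey.
scan_interval_s = 5.0    # assumed seconds per full channel sweep
detect_radius_m = 50.0   # assumed radius within which an AP is detectable

# To guarantee at least one scan inside every network's coverage circle,
# the scanner must travel less than the circle's diameter per scan cycle.
max_speed_ms = 2 * detect_radius_m / scan_interval_s
max_speed_mph = max_speed_ms * 2.23694   # m/s to mph

print(f"max survey speed: {max_speed_ms:.0f} m/s ({max_speed_mph:.0f} mph)")
```

In practice the worst-case chord through a coverage circle is shorter than its diameter, so actual survey speeds should be well below this bound.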
For my experiment, I chose the Cherry Creek North neighborhood of Denver, CO.
According to the 2000 US census, the area has a population of 5,028 in 3,198 households. In addition, 320 businesses, mostly restaurants and boutiques, are located on the southern edge of the neighborhood.
The area covers approximately 0.5 square miles, and is bounded by 1st and 6th Avenues to the south and north, and by University and Colorado Blvds to the east and west.
On July 18th, a test scan was performed. A small segment of the target area was scanned repeatedly on foot, and by car, at various velocities. Results were checked for accuracy and completeness.
On July 19th, I carried out a thorough scan of the neighborhood. A MacBook Air and Columbus V-900 tracking device were used to view the precise location and path, validating that all streets and alleys were traversed. Meanwhile, an iPhone 3G running WiFiFoFum was used to detect and log networks, their locations, and attributes.
The 70 city blocks which make up the neighborhood were covered in just over two hours.
1,948 wireless 802.11 networks were discovered.
11.6% of the networks observed were hiding their ESSID, and 88.4% were broadcasting.
Most of the networks (57%) had weak (WEP) or no security enabled.
- WPA2: 422 (21.7%)
- WPA: 406 (20.8%)
- WEP: 797 (40.9%)
- None: 324 (16.6%)
The location of the highest network density along the scanning path was detected at the intersection of 3rd Ave and Fillmore St, where 65 networks were detected simultaneously.
1,948 networks were detected in 2 hours, 29s within a 70 block area (0.48 square miles).
On average, a new network was detected every 3.7s during the scan.
Number of wireless networks per…
- square mile: 4,091
- city block: 28
- acre: 6
Let’s compare with the 2008 RSA/EMC study of New York City.
Their scan detected 9,227 networks and covered a 16 square mile area (conservative estimate) which included “the entire area of Manhattan, including Brooklyn, Manhattan and Williamsburg Bridges”.
That’s 576 access points per square mile, or less than 1/5th the density observed in the Cherry Creek North neighborhood.
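The comparison works out as follows, using only the figures quoted from the two surveys:

```python
# Figures quoted from the two surveys above.
nyc_networks = 9227
nyc_area_sq_mi = 16      # conservative estimate from the RSA/EMC study
ccn_density = 4091       # networks per square mile, Cherry Creek North

nyc_density = nyc_networks / nyc_area_sq_mi   # ~576 per square mile
print(round(nyc_density / ccn_density, 2))    # ~0.14, less than 1/5th
```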
It is doubtful that Cherry Creek North has a significantly more dense distribution of WiFi networks than Manhattan. More likely, the survey presented here is more comprehensive in its coverage.
The results show that, by using a rigorous scanning process, which utilizes multiple passes and takes into account the sensitivity and operational characteristics of the detector, network survey accuracy can be drastically increased.
In this survey, 1 wireless network was detected per every 2.5 residents in the neighborhood.
I have not been able to find any other documented survey which shows a higher density of access points per person or square mile.
If you’d like to view the results in Google Earth… click:
Cherry Creek North WiFi Scan KML
The financial market meltdown has caused many to re-evaluate their models for expected market returns.
In reality, though, the recent market swings are not out of the ordinary in terms of what has been observed in the past. They are still within the fringes of the ‘long tail’ of the distribution of daily market returns.
If there is an unpredictable, high-impact event waiting to spring on the market and challenge the perception of normal or ‘expected’ returns, the “dot-com” bust, 9/11 and the financial market meltdown have not revealed it. None has been a ‘Black Swan’, to use Nassim Nicholas Taleb’s term.
When it comes to the financial markets, if there is a large-impact event which requires counter-factual reasoning to model or comprehend, it has not yet occurred.
Daily leveraged funds have recently been the topic of numerous articles questioning whether they should be held for periods longer than a single trading session, and in turn, whether they are suitable for individual investors.
Take, for example, the Russell 2000 Index (^RUT), the iShares Russell 2000 ETF (IWM), which tracks the index, and the various daily leveraged funds which attempt to track the index to the tune of -3x (TZA, Direxion Daily Small Cap Bear 3X Shares), -2x (TWM, UltraShort Russell2000 ProShares), -1x (RWM, Short Russell2000 ProShares), 2x (UWM, Ultra Russell2000 ProShares) and 3x (TNA, Direxion Daily Small Cap Bull 3X Shares).
Over the YTD period through 6/19/09, all daily leveraged equivalents of the fund, both the bullish and bearish versions, have underperformed the index.
Indeed, the market’s extraordinary recent volatility highlights the perils of daily leveraged funds. Daily rebalancing over periods of price fluctuation, without successive days of monotonic price increase or decrease, has a significant negative impact on the value of leveraged funds.
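The rebalancing drag can be illustrated with a toy example (made-up numbers, not real fund data): a benchmark that falls 10% one day and recovers exactly to its prior level the next, repeated for five cycles. The benchmark ends flat, but a 2x daily-rebalanced fund loses ground every cycle:

```python
# Toy illustration of daily-rebalancing decay (not real fund data).
# The benchmark drops 10%, then recovers exactly to its prior level,
# repeated for 5 cycles.  A 2x daily leveraged fund rebalances each
# day, so each round trip leaves it below where it started.
down = -0.10
up = 1 / (1 + down) - 1   # return that exactly undoes the drop (~+11.1%)

benchmark = 1000.0
leveraged = 1000.0
for _ in range(5):
    for r in (down, up):
        benchmark *= 1 + r
        leveraged *= 1 + 2 * r   # 2x the daily return

print(f"benchmark: ${benchmark:,.2f}")   # back to $1,000.00
print(f"2x fund:   ${leveraged:,.2f}")   # roughly $894, decayed
```

Each down/up round trip multiplies the 2x fund by 0.8 × 1.222 ≈ 0.978, so the loss compounds even though the benchmark is flat.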
Performance and Holding Period
Does that mean that leveraged ETFs are always inappropriate for investment over periods longer than a trading session?
Leveraged funds are a relatively new invention in the ETF world, and most don’t have sufficient history to compare longer-term performance vs benchmarks.
Performance – Real vs. ‘Ideal’
Leveraged ETFs do, though, track the benchmarks they follow relatively closely, with daily performance magnified by the intended degree of leverage. Take, for example, the +2x and +3x daily leveraged equivalents of the Russell 2000 index over the first two months of Q2.
‘Ideal’ leverage can be calculated by applying a factor of ‘x’ to the daily returns of the Russell 2000 benchmark and comparing the result to the returns of the corresponding daily leveraged ETF. In reality, the daily leveraged ETFs slightly underperform the synthesized ‘ideal’ returns. The difference represents tracking error and fund expenses, including management fees and the fund’s internal costs of borrowing to maintain leverage.
In this example, at the end of the period, the value of $1000 invested, and held, would be:
- +3x: $2117 – ideal (synthesized)
- +3x: $2080 – TNA (Direxion Daily Small Cap Bull 3X Shares)
- +2x: $1750 – ideal (synthesized)
- +2x: $1746 – UWM (Ultra Russell2000 ProShares)
- +1x: $1364 – ideal (Russell 2000 Benchmark)
- +1x: $1363 – IWM (iShares Russell 2000 Index)
Tracking Error and Expenses
The +3x leveraged fund (TNA) underperformed the ideal by $37 or 1.7%. The 2x leveraged fund (UWM) underperformed the ideal by $4 or 0.2%. And, the 1x leveraged fund (IWM) underperformed the Russell 2000 benchmark by just $1.
So, the leveraged funds do a pretty good job of tracking ideal leverage, even over periods far longer than a single trading session.
How about performance? Can daily leveraged funds produce performance proportional to the daily leverage rate over longer periods? In this example, during which market gains were mostly monotonic, they do. The benchmark was up 36.3%, the 2x fund up 74.6% (2.05x the benchmark), and the 3x fund up 108% (2.97x the benchmark).
This example period, though, was chosen with the benefit of hindsight, just as the selection of other, more volatile periods, can be chosen to highlight the risks.
Synthesized Long-Term Performance
How, then, do leveraged ETFs perform over much longer holding periods? The data does not exist, because leveraged funds don’t have sufficient history, but using methods similar to those used to calculate the ‘ideal’ data points highlighted in the charts above, it can be synthesized by applying daily leverage to historical market data.
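A minimal sketch of that synthesis: compound each day's benchmark return, multiplied by the leverage factor, into the leveraged series. The closing prices below are made-up sample data, not actual index history:

```python
# Synthesize a daily-leveraged series from benchmark closing prices.
# The closes below are made-up sample data, not actual index history.
def synthesize(closes, leverage, start_value=1000.0):
    value = start_value
    for prev, curr in zip(closes, closes[1:]):
        daily_return = curr / prev - 1
        value *= 1 + leverage * daily_return   # daily rebalance
    return value

closes = [100.0, 102.0, 99.0, 101.0, 104.0]   # hypothetical closes
for x in (1, 2, 3):
    print(f"{x}x: ${synthesize(closes, x):,.2f}")
```

At 1x this simply reproduces the benchmark's total return; at higher leverage the outcome depends on the path of daily returns, not just the endpoints.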
How would $1000 invested in the NASDAQ composite perform versus $1000 invested in 2x and 3x leveraged versions of the index, over a very long period, such as since the index’s inception?
Surely not very well, since leveraged ETFs aren’t appropriate for long holding periods, and since NASDAQ has been wracked by the “dot-com bust” and the recent world financial crisis and “meltdown”? Right?
Wrong. On the contrary, even if the recent recovery is excluded, daily leveraged investments perform handsomely, with the 2x version outperforming the index by a factor that exceeds its daily leverage target:
$1000 invested in NASDAQ since inception (2/5/71) to 3/20/09…
- unleveraged: $14,572
- leveraged 2x: $46,236
- leveraged 3x: $31,026
Sure, leverage is not without its risk. And, the risk is significant.
For example, a $1000 investment in leveraged versions of the index, at the height of the “dot-com boom”, through 3/20/09, is nearly wiped out…
- unleveraged: $289
- leveraged 2x: $35
- leveraged 3x: $2
But, when timing is more fortunate, leveraged versions of the index can far exceed their advertised daily performance targets, even over long periods. $1000 invested in NASDAQ from 7/6/95 to end of .com boom (3/10/2000)…
- unleveraged: $5,360
- leveraged 2x: $22,456
- leveraged 3x: $73,262
Leveraged ETFs are good at what they advertise: magnifying daily performance by their leverage ratio.
The performance of leveraged funds is influenced both by daily results and by the compounding of those leveraged results, which makes them more sensitive than unleveraged funds to the volatility of the target investment.
The real risk inherent in daily leveraged ETFs isn’t their performance or behavior when held long term; it is that such funds are more volatile than the unleveraged versions by factors which exceed the advertised leverage ratio. Over longer periods, this translates into disproportionate risk which isn’t necessarily compensated by the leverage ratio.
* 2009.04.01 – 09:00
* Southampton Beach, Bermuda
* Pat, Dave and Guide (Chris)
* 35 ft max depth
* 30 ft visibility
* 35 minutes dive time
* 65 deg F water temperature
* 3000 psi start
* 1000 psi end
* 12 lbs weight
Shore dive with Dave and guide (Chris). Waded into light surf on private beach of Fairmont Southampton Hotel.
Swam directly south from hotel beach, over a limestone/coral wall and out into the open sea, across a sandy plain to large coral structures approximately 1/2 mile offshore. [near center of image above]
Saw and photographed a purple-striped jellyfish.
The Lotus Elise and 911 GT3 have dominated Super Stock in national competition for three years running, but the Z06 continues to reign supreme in the SCCA Rocky Mountain (Colorado) and Continental Divide regions.
Driving a Z06, I took Super Stock for the third year in a row. My points average was up by more than ten points per event, barely enough to ward off the challenge posed by several determined and talented Lotus Elise drivers who have kept things very close in the ultimate stock class.
* Event #1 – DNS
* Event #2 – 5th in Super Stock, 25th of 195 drivers
* Event #3 – 1st in Super Stock, 10th of 198 drivers
* Event #4 – DNS
* Event #5 – 1st in Super Stock, 11th of 159 drivers
* Event #6 – 3rd in Super Stock, 29th of 150 drivers
* Event #7 – 5th in Super Stock, 39th of 169 drivers
* Event #8 – 1st in Super Stock, 4th of 129 drivers
* Event #9 – 3rd in Super Stock, 28th of 179 drivers
* Event #10 – 2nd in Super Stock, 42nd of 150 drivers
* Event #11 – 2nd in Super Stock, 40th of 206 drivers
* Event #12 – 2nd in Super Stock, 21st of 153 drivers
* ranked #1 out of 21 in Super Stock
* ranked #16 out of ~800 active Colorado and Rocky Mountain region drivers
* 2004 RMDiv Solo2 Summer Series – 3rd in Super Stock, #34 overall – 946 pts/event avg
* 2005 RMDiv Solo2 Summer Series – 2nd in Super Stock, #22 overall – 953 pts/event avg
* 2006 RMDiv Solo2 Summer Series – 1st in Super Stock, #19 overall – 952 pts/event avg
* 2007 RMDiv Solo2 Summer Series – 1st in Super Stock, #19 overall – 957 pts/event avg
* 2008 RMDiv Solo2 Summer Series – 1st in Super Stock, #16 overall – 969 pts/event avg
Attached are some photos from a trip to Fowler-Hilliard, a 10th Mountain Division hut near Minturn.
I shot these pics with a Nikon D100, a 6-megapixel camera that accepts SLR lenses.
The original images are 3008×2000. Click for higher detail.
Shot with a 105mm macro lens plus extension tube to allow extreme close-up and high magnification. The big sphere is a dew drop on a small leaf measuring about 1/2 centimeter long. If you look closely at the surface of the droplet, you can see the faint reflection of the whole plant.
Backlit shot of an ‘Indian Paintbrush’, using the same 105mm macro lens without the extension tube.
Moon at sunrise with a 300mm telephoto. Overexposing the image by several stops ‘blows out’ the details you’d normally see in the bright crescent but makes it possible to capture detail in the ‘dark’ part of the moon which is normally only faintly visible.
The hazy streak in the lower left is a lens flare.
Sunrise over the Tenmile Range, shot from the top of Resolution Peak with a 17mm wide angle.