Kalyan Varma's Friends
|Saturday, December 7th, 2013|
|XScreenSaver 5.24 out now
There are a few minor fixes, but the big change is that it now has an auto-updater. After you've installed 5.24, it will let you know when a subsequent release is available and offer to download and install it for you. (This was tricky, because screen savers are weird and there are 204 of them.)
Oh yeah, I also pushed out Dali Clock 2.40 a little while ago that does the same thing. (That one was easier.)
No iOS versions this time, because the changes weren't relevant.
Mirrored from jwz.org.
|Stolen cobalt-60 found in Mexico; thieves may be doomed.
In the immortal words of J. Frank Parnell: "Oh... You don't wanna look in the trunk."
Mexico's public-health scare turned into a logistical hurdle Thursday as authorities sought to safely put a stolen load of radioactive material back into its container.
As officials worked on the material, federal police and soldiers formed a cordon of several hundred yards around the field in Hueypoxtla where a container of highly radioactive cobalt-60 was abandoned after it was stolen from truck drivers transporting it to a storage facility in central Mexico. [...]
The drivers of the cargo truck were sleeping at a gas station this week when gunmen assaulted them and stole their truck. Mexican nuclear safety officials said they believed the carjackers did not know what they were stealing and that they may die from exposure to the radioactive material.
The IAEA said in its statement that it "would probably be fatal to be close to this amount of unshielded radioactive material for a period in the range of a few minutes to an hour." It is unclear how long the material was handled by the carjackers or others who found it later.
Previously, previously, previously, previously, previously, previously, previously, previously.
Mirrored from jwz.org.
|Friday, December 6th, 2013|
|New Book: Carry On
I have a new book. It's Carry On: Sound Advice from Schneier on Security, and it's my second collection of essays. This book covers my writings from March 2008 to June 2013. (My first collection of essays, Schneier on Security, covered my writings from April 2002 to February 2008.)
There's nothing in this book that hasn't been published before, and nothing you can't get free off my website. But if you're looking for my recent writings in a convenient-to-carry hardcover-book format, this is the book for you.
I'm also happy with the cover.
The Kindle and Nook versions are available now, and they're 50% off for a limited time.
Unfortunately, the paper book isn't due in stores -- either online or brick-and-mortar -- until 12/27, which makes it a pretty lousy Christmas gift, though Amazon and B&N both claim it'll be in stock there on December 16. And if you don't mind waiting until after the new year, I will sell you a signed copy of the book here.
Suggestions for a title of my third collection of essays, to be published in five-ish years, are appreciated.
|Telepathwords: A New Password Strength Estimator
Telepathwords is a pretty clever research project that tries to evaluate password strength. It's different from normal strength meters, and I think better.
Telepathwords tries to predict the next character of your passwords by using knowledge of:
- common passwords, such as those made public as a result of security breaches
- common phrases, such as those that appear frequently on web pages or in common search queries
- common password-selection behaviors, such as the use of sequences of adjacent keys
Password-strength evaluators have generally been pretty poor, regularly assessing weak passwords as strong (and vice versa). I like seeing new research in this area.
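To make the prediction idea concrete, here is a toy sketch (my own illustration, not Telepathwords' actual model or code): guess each next character from a small list of known-common passwords, and score a password by how many characters the predictor fails to guess. The real system draws on far larger corpora of breached passwords, web phrases and keyboard patterns.

```python
# A stand-in list; the real system uses breach corpora and query logs.
COMMON = ["password", "password1", "letmein", "qwerty", "123456"]

def predict_next(prefix):
    """Most common continuation of `prefix` among known passwords."""
    counts = {}
    for pw in COMMON:
        if pw.startswith(prefix) and len(pw) > len(prefix):
            ch = pw[len(prefix)]
            counts[ch] = counts.get(ch, 0) + 1
    return max(counts, key=counts.get) if counts else None

def surprising_chars(password):
    """Characters the predictor failed to guess: a rough strength score."""
    return sum(1 for i in range(len(password))
               if predict_next(password[:i]) != password[i])

print(surprising_chars("password1"))  # -> 0: every character was guessed
print(surprising_chars("xk7#qzp"))    # -> 7: nothing was guessed
```

A strength meter built this way penalizes exactly the passwords attackers try first, which is the property ordinary entropy-style meters get wrong.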
|Thursday, December 5th, 2013|
If there's one thing you can count on at a Live 105 show, it's the roiling cloud of pot smoke, regardless of the band. It's cool, "bro", I remember my first concert.
I wish the weekend stoners would start bringing brownies instead. Eventually all the rappers would be all, "Throw your cookies in the air! Who likes ginger snaps yo!"
Mirrored from jwz.org.
|Friday, December 6th, 2013|
|Bing Ads API bug report
I'm having trouble using the Bing Ads API, and I had complained (loudly) via Twitter. I had already noticed, shall we say, certain egregious errors in their docs. In response, their customer support folks had asked me to email them the errors I had noticed. Surprise, surprise!! The email address I was asked to send to is apparently "protected" and bounced with the message: "Your message can't be delivered because delivery to this address is restricted."
WTF??? So, here goes, Bing Ads, these are the errors I noticed in your "documents".
- The getting started page talks about the headers which are supposed to be sent with every request. Just above the section on required headers, there is a disclaimer that only AuthenticationToken must be used instead of username and password. So which auth header is supposed to be used?? And why is username/password there in the docs if it's not being used??
- The link to the getting-started video refers to the v8 API, and uses username/password instead of OAuth.
- The examples for Python all refer to v8, not v9. Further, they all hand-craft the SOAP XML! And, of course, the two or three I tried don't work.
- This example again uses username/password instead of AuthenticationToken. It also doesn't work. It also uses a namespace for the auth tokens xmlns:cus="https://adcenter.microsoft.com/api/customermanagement" whereas the "getting started" link doesn't mention any such namespace.
I didn't have the patience to try beyond this. I'm frankly shocked that even the "getting started" section is so replete with errors. I don't know whether people even use this API, or whether Bing is serious about supporting one.
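For what it's worth, here is the shape of a request I pieced together from the fragments above. It is untested, and the namespace and header names are assumptions reconstructed from the docs' own contradictory examples, not a verified working call; it at least shows the AuthenticationToken header the docs say should replace username/password.

```python
# Hand-rolled SOAP envelope for the customer-management service.
# CUS_NS is the namespace quoted from the example mentioned above;
# whether v9 actually expects it is exactly what the docs fail to
# make clear, so treat every detail here as an assumption to check.
SOAP_NS = "http://schemas.xmlsoap.org/soap/envelope/"
CUS_NS = "https://adcenter.microsoft.com/api/customermanagement"

def make_envelope(auth_token, developer_token, body_xml):
    """Build a request using AuthenticationToken (not username/password)."""
    return f"""<s:Envelope xmlns:s="{SOAP_NS}" xmlns:cus="{CUS_NS}">
  <s:Header>
    <cus:AuthenticationToken>{auth_token}</cus:AuthenticationToken>
    <cus:DeveloperToken>{developer_token}</cus:DeveloperToken>
  </s:Header>
  <s:Body>{body_xml}</s:Body>
</s:Envelope>"""
```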
|Thursday, December 5th, 2013|
Why did they name it this??
I upgraded from MacOS 10.8.3 to 10.9. I didn't want to, but it was bound to be necessary eventually.
Pro tip: if you want to do a clean install, but don't want to spend 48 hours restoring your photos and music from backup afterward, you can now do it like so:
- Boot the installer.
- Open Disk Utility from the installer's menu and mount your drive.
- Open Terminal from the installer and rm -rf everything except /Users.
- Install. /Users will remain.
- chown to taste.
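The steps above can be sketched as a small shell function (a hypothetical helper, not anything Apple ships; the volume path is whatever Disk Utility mounted your drive as, and obviously this is exactly as destructive as it looks):

```shell
# wipe_except_users: delete everything at the volume's top level except
# Users, so a clean install keeps your home directories in place.
wipe_except_users() {
  local volume="$1"
  for entry in "$volume"/*; do
    case "$(basename "$entry")" in
      Users) ;;                      # keep home directories
      *) rm -rf "$entry" ;;          # everything else goes
    esac
  done
}

# From the installer's Terminal, something like:
#   wipe_except_users "/Volumes/Macintosh HD"
```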
I've been using it for about a week, and here are the things that suck most about 10.9:
- You can't use iTunes 10.7 to sync a phone.
I was careful to download the iTunes 10.7 package before upgrading, and not let iTunes 11 touch my Music directory before deleting iTunes 11 and re-installing 10.7. However, it turns out that while you can run iTunes 10.7 on OSX 10.9, what you cannot do is sync anything. No local backups, no transfer of local MP3 files to the phone. Presumably no Xcode. So if your phone has no music files on it and you do your backups through iClod, I guess you can keep using 10.7. Otherwise, you're fucked.
- It goes without saying that iTunes 11 is a complete disaster.
- No "iTunes DJ" ("Up Next" is a terrible substitute).
- No way to play higher rated songs more often.
- No way to anonymously request songs from Remote.app.
- No way to open multiple windows.
- The multi-screen support has gone completely insane. It's nice having a menubar on each screen, I guess (though honestly I don't care) but they changed the behavior so that apps no longer remember which screen they were on! When I launch iCal, for instance, sometimes it's on my main screen and sometimes on my second screen instead of staying where I put it.
There's a workaround for this, but it's a hassle:
- Run "Mission Control" and click the "plus" box in the upper right corner of your main screen (it doesn't look much like a plus box) to make a second, blank, "space" on that screen.
- Now the context menu of each item in the Dock will have a new option, "Assign to Desktops on Display 1" or "Assign to Desktops on Display 2". Using this, you can lock an app to a particular screen.
- You have to do this for every app.
- But if you don't want all of a particular app's windows on the same screen -- for example, you want the main window on one screen, and status windows on another -- you're fucked. You have to move them manually every time they open.
But I have just discovered that you can go back to the old behavior by de-selecting "Displays have separate Spaces" in Mission Control preferences, though then you have to reboot, so at first I didn't realize it was working.
- The Mail.app icon is no longer badged with the number of unread messages.
As before, I have my "Dock unread count" and "New message notifications" prefs set to a smart mailbox that includes the various folders in which new messages appear. Notifications work, badges don't.
Oh, except then I rebooted and now Mail.app is permanently badged with "1" regardless of the number of unread messages. How very.
Oh, this appears to be because my "Biff Mailboxes" smart mailbox is permanently badged with 1 unread message. Though when I sort by unread, it shows me no unread messages in it. This seems to now be true of most of my smart mailboxes: they all have completely random and untrue unread counts.
Maybe blowing away Spotlight -- again -- will fix it. I'll know in a couple of days.
- Mail.app removed the "Hide" button next to the "MAILBOXES" section. Since I have multiple identities that arrive at the same IMAP server, with different inboxes per account, I never use the privileged and undeletable "Inbox" folder. It's always empty: nothing is delivered there. Before, I could move the "MAILBOXES" section to the bottom and close it with "hide" but now I can't so it's always there taking up space.
- The CPU load meters in Activity Monitor are even more hideous than before. I thought skeuomorphism was out of favor now?
- And they are no longer restored when the app restarts.
- For some reason, using Privoxy as Safari's proxy server has become really unreliable: about 10% of URLs get an immediate "proxy server not responding" error (not a timeout). Nothing in Privoxy's logs or system.log indicating the failure.
- Safari removed the ability to take the "Top Sites" icon out of the Favorites bar. Fuck you.
- Safari auto-quits all the time if no windows happen to be open. Thanks for making it take an additional 5 seconds for me to open a page.
Possibly this fixes it?
defaults write -g NSDisableAutomaticTermination -bool yes
I kind of liked it that Preview and certain other apps auto-quit, so I wish I could turn it off just for Safari. That seems to be global.
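An untested guess at a narrower version: the same key scoped to Safari's own preferences domain, on the theory that an app consults its own domain before the global one. I haven't verified that Safari honors the per-app key, so treat this as a sketch rather than a known fix.

```shell
# Global form, as above (affects every app):
defaults write -g NSDisableAutomaticTermination -bool yes

# Hypothetical per-app form: write the key only into Safari's domain,
# leaving Preview and friends free to auto-quit. Unverified.
defaults write com.apple.Safari NSDisableAutomaticTermination -bool yes

# Read it back to check the value stuck:
defaults read com.apple.Safari NSDisableAutomaticTermination
```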
- I have a fun Filevault bug. When the machine cold-boots and asks for a password to unlock the disk, there are certain letters I can't type. Let's say I can type A, B and D, but not C or E. It's crazy. This is with my favored old keyboard, through a PS2/USB adapter. So I plug in the "official" Apple USB keyboard. Can't type the characters there either. Unplug my "real" keyboard: now I can type those characters on the Apple keyboard. That's right, the presence of one keyboard is disabling keys on the other.
This only happens on the boot screen, not once the machine is up and running. After that, it's fine. So I have to have a second keyboard around every time I reboot. Oh and it only happens most of the time.
- It is currently only 45°F in San Francisco. I know this is not strictly Apple's fault but I'm going to blame it on "Mavericks" anyway.
Mirrored from jwz.org.
Here's a new biometric I know nothing about:
The wristband relies on authenticating identity by matching the overall shape of the user's heartwave (captured via an electrocardiogram sensor). Unlike other biotech authentication methods -- like fingerprint scanning and iris-/facial-recognition tech -- the system doesn't require the user to authenticate every time they want to unlock something. Because it's a wearable device, the system sustains authentication so long as the wearer keeps the wristband on.
|You will know my name is the LORD when I exterminate all rational lambda calculus.
King James Programming: Posts generated by a Markov chain trained on the King James Bible and Structure and Interpretation of Computer Programs.
3:23 And these three men, Noah, Daniel, and Job were in it, and all the abominations that be done in (log n) steps.
45:5 Thine arrows are sharp in the heart of man to be ruler over my people, over whom I have no son to keep the procedure general, we express the process in terms of a physical analogy: Think of the diagram as a maze in which a marble is rolling.
We argued that a rational-number representation could be anything at all that the LORD will make.
The global environment is chosen here, because this is the will of God.
God saith, What hast thou to do with negotiating the transition between imperative statements (from which programs are constructed) and declarative statements?
22:14 The mouth of strange women is a deep and wonderful property of computation.
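The generator behind this is simple enough to sketch in a few lines (a minimal word-level Markov chain of my own, not the actual King James Programming code): train on the concatenation of two corpora, then random-walk the chain. The funny crossovers happen wherever the two texts happen to share a word pair.

```python
import random

def train(text, order=2):
    """Map each `order`-word prefix to the list of words that follow it."""
    words = text.split()
    chain = {}
    for i in range(len(words) - order):
        key = tuple(words[i:i + order])
        chain.setdefault(key, []).append(words[i + order])
    return chain

def generate(chain, length=15, seed=0):
    """Random-walk the chain from a random starting prefix (order 2)."""
    rng = random.Random(seed)
    key = rng.choice(list(chain))
    out = list(key)
    for _ in range(length):
        followers = chain.get(tuple(out[-2:]))
        if not followers:
            break
        out.append(rng.choice(followers))
    return " ".join(out)

# Two tiny stand-in corpora; the real thing trains on the whole KJV + SICP.
corpus = ("in the beginning God created the heaven and the earth "
          "in the beginning we evaluate the operator and the operands")
print(generate(train(corpus)))
```

Here the pair "and the" is followed by both "earth" and "operands", so the walk can hop from scripture into SICP mid-sentence, which is the entire joke.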
Previously, previously, previously, previously, previously, previously.
Mirrored from jwz.org.
|Kora Kagaz: Scenes from a crumbling marriage.
Marriage is a complex institution. It brings its own share of requirements, rewards and challenges.
Too bad one isn’t handed a manual at the time of tying the knot; only experience renders a lesson in maturity. Blissful companionship can only develop if two people who are genuinely fond of each other resolve to respect, appreciate and understand their partner’s needs and expectations.
The operative word here being two. Any more and a marriage bears the potential to crumble or crash.
Director Anil Ganguly’s 1974 marital drama, Kora Kagaz, starring Jaya Bhaduri and Vijay Anand, emphasises why this space is created for two members only and how unsolicited interference from a third will invariably cause a rift.
Additionally, the restraint Kora Kagaz exercises, considering the meddlesome third party here is the girl’s overbearing mother, is most impressive. What typically works as a time-tested ploy to unleash high-pitched melodrama is perceived with surprising humaneness in this feature.
Like Khamoshi, which I revisited last week, Kora Kagaz too is based on Ashutosh Mukhopadhyaya’s story, first adapted in Bengali as Saat Pake Bandha (1963) by Ajoy Kar with Suchitra Sen and Soumitra Chatterjee in the lead.
‘Phir wohi din, wohi suraj ki pheeki pheeki roshni mere jeevan ke pathjhad mein hansti hui nazar aati hai,’ reflects Archana (Jaya Bhaduri) in a cheerless tone while gazing through the lone window.
Wearing a plain white sari and thick pair of glasses, her subdued school teacher embraces every new morning with a sense of weariness and monotony that sets in when life ceases to be vibrant, momentous or passionate. She’s only off-guard in the company of innocent children.
Her desolation, articulated to perfection in the poignant title song, works as a prelude to Kora Kagaz, which unfolds in flashback.
Though hailing from a wealthy family, Archana is a grounded girl who seems to have inherited more of her academician father’s (A K Hangal) moral values than status-conscious mother’s (Achala Sachdev) materialism. She falls in love with Literature Professor Sukesh (Vijay Anand), a scholarly idealist, with a modest salary, unwilling to compromise his ethics and join the money-obsessed rat race.
Even if Archana’s mother is prejudiced against him from day one, her father appreciates Sukesh’s sentiments. Archana and Sukesh get married without any real hitch.
Problems arise when Archana’s mom begins to unwittingly remote-control their marriage. Where the same premise was treated with exaggeration in films like Pyar Jhukta Nahi and Chalte Chalte, it is handled rather realistically in Kora Kagaz.
Take the scene where Archana and Sukesh visit her family with lavish gifts for her parents, siblings and domestic help: Archana’s mother remarks on the needless expenditure and how they ought to maintain a budget. Her intention is well-meaning, but the timing of her words injures her proud son-in-law a great deal.
Sukesh is conscious of his wife’s previously comfortable lifestyle and aims to please her within a ‘budget’ that’s feasible for him. But her mother’s one-upmanship frustrates him beyond repair. She gifts them a refrigerator, installs a telephone in their flat, lies about his profession and humiliates him repeatedly.
All this time, Archana is unable to understand the degree of insult her mother has inflicted on her significant other’s dignity. This hurts the overworked Sukesh (now taking extra tuition) gravely. Unable to sit down and talk it through, they both desperately try to mend things through romantic gestures, which trigger further heartbreak.
The fragile state of their collapsing marriage is evinced in that intimate scene (handled with utmost grace and subtlety by Ganguly) wherein Sukesh tries to appease his cross missus by decorating their bedroom with flowers. She rebuffs him in a move that indicates the hostility has reached its zenith.
Released in 1974 — a year after Jaya Bhaduri did another film about marital discord against the backdrop of frail egos in Hrishikesh Mukherjee’s Abhimaan alongside husband Amitabh Bachchan—there’s nothing repetitive about her performance in Kora Kagaz.
Unlike Abhimaan’s reserved Uma, Archana speaks her mind and follows her heart. She can be both sarcastic and accommodating as per her will. Essentially though, they’re both graceful and generous human beings.
Jaya Bhaduri lends Archana so much dignity, it’s hard not to feel biased and view Kora Kagaz from her point of view alone. At 26, she demonstrates the composure and calibre of a veteran. Her performance is too pure to be measured in praise. Understandably, she was awarded the Filmfare trophy for Best Actress.
Vijay Anand doesn’t have the screen presence of his superstar brother Dev, but he suits the role of a strong-minded intellectual. Ganguly understands his personality and works the character in ways that make him likeable for what he is. If only there weren’t so many continuity issues with the filmmaker and occasional actor’s ever-changing hair.
As the uppity mother unconsciously causing havoc in her daughter’s life, Achla Sachdev is absolutely terrific. She successfully conveys the absence of malice in her character while highlighting the tendency to weigh ingenuity with income without shrieking, screeching or creating a ruckus.
Sombre toned for most part, Kora Kagaz allows some comic relief through the side track of Archana’s brother (Dinesh Hingoo, long before he became a loud, giggling ham) and his best friend (a delightful Deven Varma) as two good-for-nothing blokes devising business schemes that never take off.
Kora Kagaz has only three songs, composed by Kalyanji Anandji, but they rank among the duo’s best works.
To begin with, there is Kishore Kumar’s stirring rendition of the despondent ditty, Mera jeevan kora kagaz, pertinently penned by M G Hashmat. Ensuing accolades put Kishore da in a celebratory mood. “We were very close, and the only party he [Kishore Kumar] ever threw in his house was in our honour after our award (Filmfare for Best Music) for Kora Kagaz,” recalls Anandji.
The soundtrack also features Mera padhne mein nahin lage dil and Roothe roothe piye both performed by Lata Mangeshkar. Her perfection in the latter track fetched her a second National Award (following Parichay).
Speaking of National Awards, Kora Kagaz, which fared reasonably at the box-office, won for Best Popular Film.
Kora Kagaz is that rare, sensitive film, which depicts the brittleness of bonds when not allowed to blossom at their own pace and accord. Newly-weds need time and breathing space to adjust to a new environment, a person. The baggage of what a third person says or expects only adds to their woes and leads to pointless bickering and bitterness.
At the same time, if this other person doesn’t have the sense to step back, it’s imperative you draw the line. If they truly mean well, they’ll only be happy if you are.
And so, despite doing nothing wrong (Na pavan ki, na chaman ki, kiski hai yeh bhool?), Sukesh and Archana hurt their chances by allowing someone else to control their marriage. Compromise isn’t a seemly word, but understanding was specially coined for this relationship.
Kora Kagaz doesn’t dwell on this alone. It looks at marriage even after two people have gone their separate ways without a real sense of closure. Because, in their case, there could never be one. Which makes the optimistic conclusion in the final railway waiting room scene not a cinematic copout but a genuine, heartfelt culmination.
This article was first published on rediff.com.
The bitter irony and burning intimacy of Awara.
The seductive imagery of Feroz Khan’s Qurbani.
Half Ticket: Kishore Kumar’s return to childhood.
Rishi-Neetu’s inexhaustible charm in Khel Khel Mein.
Shakti: Dilip Kumar-Amitabh Bachchan’s ultimate face-off!
Love, longing and disillusionment in Gharonda.
Junglee: Loads to yahoo about Shammi-Saira’s breezy romance.
Johnny Mera Naam | Khamosh | Ittefaq | Lal Patthar | Umrao Jaan | Qayamat Se Qayamat Tak | Sikander | Ram Aur Shyam | Teesri Manzil | Chashme Buddoor | Mili | Aag | Taxi Driver | New Delhi | Chaudhvin Ka Chand | Salaam Bombay | Jaane Bhi Do Yaaron
|The Problem with EULAs
Some apps are being distributed with secret Bitcoin-mining software embedded in them. Coins found are sent back to the app owners, of course.
And to make it legal, it's part of the end-user license agreement (EULA):
COMPUTER CALCULATIONS, SECURITY: as part of downloading a Mutual Public, your computer may do mathematical calculations for our affiliated networks to confirm transactions and increase security. Any rewards or fees collected by WBT or our affiliates are the sole property of WBT and our affiliates.
This is a great example of why EULAs are bad. The stunt that resulted in 7,500 people giving Gamestation.co.uk their immortal souls a few years ago was funny, but hijacking users' computers for profit is actually bad.
|New Research: Cheating on Exams with Smartwatches
A Belgian university recently banned all watches from exams due to the possibility of smartwatches being used to cheat. Similarly, some standardized tests in the U.S., like the GRE, have banned all digital watches. These policies seem prudent, since today’s smartwatches could be used to smuggle in notes or even access websites during the test. However, their potential use for cheating goes much farther than that.
As part of my undergrad research at the University of Michigan, I’ve recently been focusing on the security and privacy implications of wearable devices, including how smartwatches might be used for cheating in an exam. Surprisingly, while there’s been interest in the security implications of wearable devices, the focus within the research community has been on how these devices might be attacked rather than on how these devices challenge existing social assumptions.
ConTest: A Smartwatch App for Collaborative Cheating
As a proof of concept, I developed ConTest, an application for the Pebble smartwatch that shows how students could inconspicuously collaborate on multiple-choice exams in real time. ConTest allows students to select a question, vote on answers, and view the most popular solution based on all of the responses from other students taking the exam. Prior to an exam, students pair their watches with their smartphones and choose the exam that they are taking. During the exam, the smartphone—hidden in the student’s pocket or backpack—facilitates communication between the smartwatch and a cloud-based aggregation service. All user interaction during the exam takes place on the smartwatch itself with simple, inconspicuous button presses.
ConTest demonstrates how hard such an application can be to detect. It displays the question number and answer by inverting a small number of pixels in digits of the time and date. For example, in the figure below, the red-circled block of missing pixels in the seven indicates that the user has voted for answer B. The purple-circled block of pixels in the five indicates that the most popular answer selected by other users is D. Similarly, the question number is encoded with missing pixels in the top date digits using a binary encoding.
Although users can see this interface at close range, it’s practically invisible from more than a couple of feet away, and the cheating application looks just like a regular watch face.
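The encoding is easy to reconstruct in miniature (my own sketch from the description above, not the actual Pebble watchface code): pick which small block of pixels inside a digit to invert, and let the block's position carry the answer.

```python
# Toy reconstruction of ConTest-style pixel encoding. An answer choice
# A-D is signalled by WHICH 2x2 block of pixels gets inverted inside a
# digit's bitmap; a confederate decodes it by locating the inverted block.
ANSWERS = "ABCD"
BLOCKS = [(1, 1), (1, 4), (4, 1), (4, 4)]  # candidate block origins (row, col)

def encode(digit_bitmap, answer):
    """Return a copy of the bitmap with one 2x2 block inverted."""
    r0, c0 = BLOCKS[ANSWERS.index(answer)]
    grid = [row[:] for row in digit_bitmap]
    for r in range(r0, r0 + 2):
        for c in range(c0, c0 + 2):
            grid[r][c] ^= 1
    return grid

def decode(original, displayed):
    """Recover the answer by finding which candidate block was inverted."""
    for i, (r0, c0) in enumerate(BLOCKS):
        if all(original[r][c] != displayed[r][c]
               for r in range(r0, r0 + 2) for c in range(c0, c0 + 2)):
            return ANSWERS[i]
    return None  # nothing inverted: it's just a watch face

digit = [[1] * 8 for _ in range(8)]       # stand-in for a digit's bitmap
print(decode(digit, encode(digit, "C")))  # -> C
```

Four flipped pixels out of a whole digit is why the display reads as an ordinary watch face from a few feet away.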
Disrupting Security Assumptions
The obvious solution for preventing students from cheating using smartwatches is to ban watches from exams, just as Artevelde College in Belgium recently did. But the devices will continue to evolve, both decreasing in size and detectability, and increasing in capability and ubiquity. Future form factors are likely to be even less conspicuous and enable more unique attacks—think smart contact lenses or implantable smartphones. Outright bans may not be desirable or even feasible. In the long run, we will need to adapt in more drastic ways, perhaps by abandoning traditional exams as a form of student assessment.
While wearable devices offer an exciting platform for new types of applications, they also upend implicit security assumptions that are built into many everyday social contexts. Testing centers assume that watches don’t talk to the Internet; casinos assume that eyeglasses aren’t heads-up displays. ConTest demonstrates that even today’s technology challenges present threat models. The time has come for the research community to start considering the attack vectors introduced by this new class of technology, and for all of us to start adapting our assumptions and threat models based on an awareness of such devices.
If you’re interested in a more detailed discussion about ConTest and the security implications of smartwatches, see this technical paper I coauthored with Zakir Durumeric, Jeff Ringenberg, and J. Alex Halderman.
|Thursday, December 5th, 2013|
|Digital radio broadcasting in Brazil, a technopolitical struggle.
In the last week of November 2013, the second edition of ESC took place in Rio de Janeiro, Brazil. ESC is the acronym for “Espectro, Sociedade e Comunicação” (Spectrum, Society and Communication); as the name suggests, participants discussed fair use of the radio spectrum in order to empower society through a plural and free means of communication: digital radio broadcasting.
Yes, radio broadcasting is still important in many ways, and not only in Brazil. At least since Bertolt Brecht (1898-1956) wrote “Radio as a Means of Communication” in 1932, there has been a struggle over the right to speak through the radio waves. Community and unlicensed free radio stations have been trying to survive despite efforts by big communication groups to take them down. Radio spectrum scarcity has always been used as a technical excuse to keep communication power concentrated in a few hands.
But now this picture may change in Brazil. More than two years ago, the Brazilian House of Representatives created a special commission to study the digitization of radio broadcasting, in order to guide the choice of the standard that will be adopted nationally. No decision has been made so far, mostly because there is a strong dispute between two technological standards. This dispute is clearly a technopolitical matter.
The already established communication conglomerates support HD Radio, lobbying in its favor through their trade association ABRA (Brazilian Broadcasters Association). HD Radio is a closed, proprietary standard, so broadcasters must pay a licensing fee to adopt the technology; component manufacturers must get a license from the patent holders and are not able to adapt or change the standard. On the other hand, DRM (Digital Radio Mondiale) is an open standard based on free hardware and software, and has been supported by free radio stations as well as academic researchers. Given the openness of the DRM standard, the national industry would be able to produce the basic equipment and adapt the technology to regional characteristics and necessities. In the Amazon region, for example, shortwave transmissions play an important role in connecting isolated locations within the rainforest. HD Radio does not work with shortwave transmission; DRM does.
Radio broadcasting itself is changing in nature by being digitized. Since digital content is not restricted to sound, new applications and features are being produced within the scope of digital radio. This convergence with information technologies makes the dispute even harder, because at the end of the day we are talking about a significant amount of the radio spectrum working with free and open technology that may transmit multimedia content. I cannot say for sure what digital radio broadcasting will become, but it seems Brecht’s proposal still makes sense: the “radio could be the most wonderful public communication system imaginable, a gigantic system of channels — could be, that is, if it were capable not only of transmitting but of receiving, of making the listener not only hear but also speak, not of isolating him but of connecting him”.
|Email to bngbirds egroup, about the Eaglenest WLS trip
On a tour organized by Geetanjali Dhar's IT Nature Club, ten of us visited Nameri (a morning's birding) and Eaglenest Wild Life Sanctuary (WLS) at Lama and Bompu Camps.
Though several experienced birders asked us why we were going at the end of November, the birding exceeded all our expectations. On the very first day, I saw two Buguns, and a couple of days later, we got to see seven of the birds, so I cannot but feel that these birds are thriving in the areas around Eaglenest, if a bunch of amateurs like us could see so many!
Other highlights were the Fire-tailed Myzornis, the Chestnut-headed Tesia, the Red-headed Trogon and a female Ward's Trogon... I think it was only the Tragopan (on the list of some of the "focus" birders) that eluded us. I don't think any of us have ever had such a list of lifers before this!
Thanks to Amith Kumar for adding some of the birds missing in the original list. That, too, as he, Gowri, Kannan and PK prepare to leave for Thattekkad tomorrow!
Current Mood: must sleep
|Wednesday, December 4th, 2013|
|The only four videos I took on the trip to Eaglenest WLS, 301113
Here are two videos of a ... bathing, and disporting itself, in the Kameng River (which becomes the Jia Bharolli in Assam):
We saw this ... foraging along the far bank of the Kameng River:
And we got the beautiful ... in the trees along the riverbank, too. (The video will give an idea of the distance the bird was at!)
This was one trip where I did not miss my DSLR and the 300mm lens at all...in fact, electricity (even with the help of a generator) was in such scarce supply that I was thankful that I didn't have to lug a heavy camera over hill and dale and then hunt for a power source to charge it with, overnight!
So, in the days to come, you'll just see my usual SMS (Shamelessly Mediocre Shots) of people, scenery, butterflies, plants, wildflowers, insects, and so on... whatever caught my eye and fancy, not just the birds!
Current Mood: dispirited
|Evading Airport Security
The news is reporting about Evan Booth, who builds weaponry out of items you can buy after airport security. It's clever stuff.
It's not new, though. People have been explaining how to evade airport security for years.
Back in 2006, I -- and others -- explained how to print your own boarding pass and evade the photo-ID check, a trick that still seems to work. In 2008, I demonstrated carrying two large bottles of liquid through airport security. Here's a paper about stabbing people with stuff you can take through airport security. And here's a German video of someone building a bomb out of components he snuck through a full-body scanner. There's lots more if you start poking around the Internet.
So, what's the moral here? It's not like the terrorists don't know about these tricks. They're no surprise to the TSA, either. If airport security is so porous, why aren't there more terrorist attacks? Why aren't the terrorists using these, and other, techniques to attack planes every month?
I think the answer is simple: airplane terrorism isn't a big risk. There are very few actual terrorists, and plots are much more difficult to execute than the tactics of the attack itself. It's the same reason why I don't care very much about the various TSA mistakes that are regularly reported.
|Biodegradable containers on a very long train journey
I was away in the foothills of the Himalaya, very far away (the journey from Bangalore to Guwahati alone was 2992 km, and passed through 8 states...Karnataka, Andhra Pradesh, Tamil Nadu, Odisha, Bihar, Jharkhand, West Bengal and Assam!) and came back with such a variety of experiences.
Just wanted to mention, in today's world of plastic bags and cups, the biodegradable containers that are still being used in many places, that I saw on the train.
One is what is called, in Bengali, "bhAnd" (or matkA in Hindi)...the earthen cup that can be thrown on the tracks, or out of the window, or anywhere, because the mud will disintegrate again:
This one contained "mishti doi" (sweet yogurt), one of the specialities of Bengal.
It was covered by a bit of newspaper and a rubber band:
This was, after ages, a typical "bhAndEr chA" (tea-in-a-mud-pot)...it has a taste all its own:
As you can see, it goes well with the cryptic crossword (that's from The Telegraph).
When the "JhAlmoodiwAlA" came by, I asked for some,
I got it in this "tongA", which is Bengali for this kind of paper container:
It's better than an ordinary folded paper bag, as when it opens out, it has some width, and sits well on a countertop:
The third container I liked very much was the "dOnA" (called donnai in Tamizh, and dOnA in Kannada, too), which is made of a certain type of leaf; the leaves are stitched together, very expertly, and shaped into bowls:
Can you see the way the stick has been expertly used to "stitch" the leaves in place?
How I wish these traditional, "eco-friendly" (to use a hackneyed term) and full-of-character containers continue to be used, and are NOT replaced by Ghastly Plastic.... Current Mood: sad
|Tuesday, December 3rd, 2013|
|Monday, December 2nd, 2013|
|BART: Powered By Dali Clock
A local operative reports in with this important information:
It seems BART has some Linux image that they push out to laptops in the station operator booths. They are all running dali clock!
Mirrored from jwz.org.
|Mark your calendars
Upcoming events of note:
|Tue, Dec 03: ||Atlas Obscura: The History of Rum @ DNA Lounge |
|Tue, Dec 03: ||Happy Fangs, Gold Boot, Faux Canada @ El Rio |
|Thu, Dec 05: ||Chvrches, Nonono, Portugal The Man @ Mezzanine |
|Fri, Dec 06: ||Point Break Live @ DNA Lounge |
|Thu, Dec 12: ||Vicerine @ DNA Lounge |
|Thu, Dec 12: ||Banks @ Popscene |
|Tue, Dec 17: ||Atlas Obscura: Holiday Obscura @ DNA Lounge |
|Fri, Dec 27: ||Jessie Evans @ Hemlock |
|Tue, Dec 31: ||Bootie NYE Shit Show @ DNA Lounge |
|Tue, Dec 31: ||The Vau de Vire thing @ The Armory |
December's always thin. What have you got?
Mirrored from jwz.org.
|The TQP Patent
One of the things I do is expert witness work in patent litigations. Often, it's defending companies against patent trolls. One of the patents I have worked on for several defendants is owned by a company called TQP Development. The patent owner claims that it covers SSL and RC4, which it does not. The patent owner claims that the patent is novel, which it is not. Despite this, TQP has managed to make $45 million off the patent, almost entirely as a result of private settlements. One company, Newegg, fought and lost -- although they're planning to appeal. The story is here.
There is legislation pending in the U.S. to help stop patent trolls. Help support it.
|How Antivirus Companies Handle State-Sponsored Malware
Since we learned that the NSA has surreptitiously weakened Internet security so it could more easily eavesdrop, we've been wondering if it's done anything to antivirus products. Given that it engages in offensive cyberattacks -- and launches cyberweapons like Stuxnet and Flame -- it's reasonable to assume that it's asked antivirus companies to ignore its malware. (We know that antivirus companies have previously done this for corporate malware.)
My guess is that the NSA has not done this, nor has any other government intelligence or law enforcement agency. My reasoning is that antivirus is a very international industry, and while a government might get its own companies to play along, it would not be able to influence international companies. So while the NSA could certainly pressure McAfee or Symantec -- both Silicon Valley companies -- to ignore NSA malware, it could not similarly pressure Kaspersky Labs (Russian), F-Secure (Finnish), or AVAST (Czech). And the governments of Russia, Finland, and the Czech Republic will have comparable problems.
Even so, I joined a group of security experts to ask antivirus companies explicitly if they were ignoring malware at the behest of a government. Understanding that the companies could certainly lie, this is the response so far: no one has admitted to doing so.
Up until this moment, only a handful of the vendors have replied: ESET, F-Secure, Norman Shark, Kaspersky, Panda and Trend Micro. All of the responding companies have confirmed the detection of state-sponsored malware, e.g. R2D2 and FinFisher. Furthermore, they claim they have never received a request to not detect malware. And if they were asked by any government to do so in the future, they said they would not comply. All the aforementioned companies believe there is no such thing as harmless malware.
|Waheeda Rehman’s haunting melancholy in Khamoshi.
Kindness resides in the word ‘nurse’, both as a role and an action. But what if her unconditional nurturing and consideration, in the garb of research/treatment, sows seeds of attachment and an undisclosed need for reciprocation?
Asit Sen’s Khamoshi documents the bitter heartbreak and irony that ensues in his 1969 black and white classic.
Based on Ashutosh Mukherjee’s short story, Nurse Mitra, which Sen adapted a decade before in Bangla as Deep Jwele Jaai led by a stellar Suchitra Sen, Khamoshi comes alive in Waheeda Rehman’s wistful eyes and Hemant Kumar’s haunting score.
One of its most iconic moments is rousing as ever: a sad but stunning Rehman, embracing Kalidas’ Meghdoot in her arms, unhurriedly ascends the stairs, clad in a bordered chiffon sari that gently traipses her steps; a mellow whistle serenades the breeze; a young man stands contemplating by the balcony (someone we seldom see but clearly identify); and Hemant da’s soulful appeal, Tum-mmmm pukaar lo, fills the suddenly bare hospital, and an unquestionably awed screen, with hypnotic allure.
Khamoshi owes much of its success to such lyrical melancholy. Most of it is exhibited through the expressive emotionality of Waheeda Rehman. All the cat eyeliner in the world cannot accentuate her glimmering, compelling orbs, which speak volumes even when she does not. (Though the legendary leading lady believes her critically-acclaimed work here is nowhere as perfect as the original Radha — Suchitra Sen’s unaffected performance.)
Director Sen sets the premise of her inner conflict in a fascinating manner. From the first scene itself, the viewer is informed of the reason behind the desolation of nurse Radha (Rehman). The writings in her diary reveal her genuine disappointment over the mere gratitude of Dev (Dharmendra) when she expected more, much more.
Here’s why. Radha is assigned to comfort Dev, a psychiatric patient, by Dr Colonel Saab (Nasir Hussain in a role screaming for Ashok Kumar) in a specialised manner to prove his theory of how compassion in a woman can cure a man of his mental illness without resorting to electric shocks or heavy-duty medication.
His ‘since God couldn’t be present everywhere he created woman’ argument sounds like a pretty compliment but doesn’t serve as a convincing basis for a medical thesis.
And so what starts out as another day at work ends up taking a lot more out of Radha than she had bargained for. The nurse in her tends to the despairing Dev’s insecurity and anxiety but the woman inside is drawn to the devastatingly handsome man on the surface.
Thrust into constant intimacy, Radha finds herself hopelessly tangled in a muddle of empathy and expectations.
Tragically, Dev doesn’t remember any of that closeness after recovery and regards her service with appreciation.
Interestingly, none of this is depicted in flashbacks or third acts but implied throughout in dialogues or fleeting glimpses that take special care not to reveal Dharmendra's face to the camera, which (masterful cinematography by Filmfare award recipient Kamal Bose) renders him surreal, unattainable and, thus, even more significant to the developments.
What I just described to you isn’t even the story; it’s just the basis of why Radha’s fate in the final scene is all the more crushing.
What transpired with Dev, Radha the devoted nurse took in her stride, but it left Radha the woman in love shattered. She keeps mum then. And again when asked why she's reluctant to take up another similar case of acute mania: Arun (Rajesh Khanna), whose recuperation using the same method can verify that Dr Colonel's notion was not a fluke.
In one poignant scene, bearing the Gulzar (credited for dialogues/lyrics) stamp all over, Arun tells Radha, ‘Mujhe toh sirf shuruvat maloom hai kahani ki. Anth nahi.’ ‘Anth mujhe maloom hai, Arun babu,’ remarks Radha in a somewhat hushed tone that is both conscious of and dreading the looming déjà vu.
By restraining its running time to a little under two hours, Sen ensures the romance drama, set against the backdrop of human psychology, doesn't get too long-drawn-out or soppy.
He alternates the serious bits with comic interludes that portray disturbed minds as animated loonies. They neither contribute to the actual screenplay nor fulfill their function of providing an authentic ambiance.
Having said that, Deven Varma stands out as a man who's lost his mind over his allegedly starving kids. There's a depressing backstory lurking somewhere, but Khamoshi already has enough of that on its platter to probe.
Another sub-plot is the one involving Radha’s tackling of the woman behind Arun’s insane behavior.
What redeems the clunky acting in this particular episode is Gulzar’s glorious penmanship and Lata Mangeshkar’s magnificent delivery of the song, Humne dekhi hai un aankhon ki mehakti khushboo, haath se chookar inhe rishton ka ilzaam na do, which plays more than a couple of times in the movie.
Music is the ethereal voice, the sublime soul of Radha’s Khamoshi.
Hemant Kumar, who also produced the film, composes a sound that is soothing, secluded and untouched.
Gulzar’s stirring poetry in Tum pukaar lo, Woh shaam kuch ajeeb thi, Humne dekhi hai and Aaj ki raat charagon blends seamlessly with the former's magical strains and offers its singers (Kishore Kumar, Lata Mangeshkar, Aarti Mukherjee and Hemant da himself) some of the best songs of their careers.
Even the picturisation by Sen is particularly pleasing. Although most of Khamoshi unfolds in the dull, stark confines of a medical institution, it steps out in fresh air for the boat song, Woh shaam kuch ajeeb thi against the majestic Howrah Bridge.
Around the late 1960s, Rajesh Khanna was just a newbie while Rehman, at 33, was one of Hindi cinema's biggest stars. Unable to memorize the lyrics, he'd bank on his seasoned co-star to prompt him the lines. As it turns out, Kaka has quite a few hit boat songs to his credit: Chingari koi bhadke (Amar Prem), Jis gali mein tera ghar (Kati Patang) and Yunhi tum mujhse pyaar karti ho (Sachcha Jhutha).
While we never quite come face to face with Dharmendra, his impact is unprecedented. In comparison, Rajesh Khanna enjoys a full-fledged role, which he carries out with a mix of raw vulnerability and unconcealed awkwardness. His interpretation of acute mania may seem more like jilted lover syndrome to today’s viewer.
His self-consciousness around his senior heroine only helps in underscoring Rehman's designation as the nurturer and Khanna's as the recipient.
Asit Sen was keen on Dev Anand for the role; Rehman later revealed she felt Sanjeev Kumar was better suited for the part. The latter, as a matter of fact, played a similar role in Khilona, which came out soon after Khamoshi.
Adapting a short story is often tricky because of its condensed format. On paper, the exclusion of detail only adds to the mystery of unexplained, open endings. On screen, this may or may not work. There’s a sense of urgency in the manner Sen wraps up the final fifteen minutes of Khamoshi. But the viewer who has invested so much emotion in the unfair treatment of selfless Radha doesn’t want to be rushed towards closure.
That’s where Waheeda Rehman’s competence comes into play. From her gradual, bit-by-bit breakdown, walking past those endless corridors, to her painful admission in its famous final scene, ‘Maine acting ki. Maine kabhi acting nahi ki. Main acting nahi kar sakti,’ Rehman leaves us truly stunned.
This article was first published on rediff.com.
|Saturday, November 30th, 2013|
|Electricity, Poop, and That Bastard Edison
San Francisco's Secret DC Grid:
Nikola Tesla's alternating current may have "won" the War of Currents at the end of the 19th Century, but the defeated incumbent -- direct-current distribution, aggressively championed by Thomas Edison -- endured. [...] Remnants of DC power distribution kept performing their assigned tasks for decades as the AC grid thickened around them.
In fact, a few live on to this day. One of the best examples is in San Francisco, where 250-volt DC power still flows through underground and overhead cables across the city. These DC lines peacefully coexist with their AC counterparts; you can see this mix of currents straddling utility poles in the city's South of Market district. DC's perseverance in that neighborhood seems fitting, for it was just a few blocks away that the tiny California Electric Light Co. -- a forebear to California's dominant Pacific Gas and Electric (PG&E) -- became the first power company in the United States, and possibly the world, to supply electricity to multiple customers from a central generating station. It was in September 1879 -- a full three years before Edison turned on his famous Pearl Street generating station in New York City -- that California Electric began burning coal, raising steam, and driving dynamos in a wooden shack at the corner of Fourth and Market streets to feed current to its customers' electric lights. [...]
DC endures in San Francisco because more than 900 of PG&E's customers still need it. Most of the utility's customers transitioned to AC lightbulbs and appliances easily enough as competing power distributors coalesced within PG&E and harmonized their equipment around AC. For some building owners, however, elevators were a problem.
DC-driven winding-drum elevators -- the leading design until the 1930s -- use a DC motor in the basement that winds and unwinds the elevator's steel cable on a steel drum, thus lifting and lowering the car from pulleys atop the elevator shaft. DC drive was the only way to go at the time for a speedy elevator, because only DC could deliver variable-speed operation for smooth starts and stops. The DC motors were also energy efficient, capable of something that has only recently become possible with modern elevator designs: regenerating power when the elevator descends.
However, safety was a weak point. If a winding drum's control system fails, its motor can drive the elevator through the roof, according to San Francisco -- based elevator consultant Richard Blaska. As a result, says Blaska, new installation of winding-drum elevators was banned in the 1940s and 1950s in favor of traction elevators, whose cable will simply slip and hold the car at the top floor if the control system fails. Traction elevators can be engineered for either AC or DC operation.
Existing DC winding-drum elevators, however, have stubbornly resisted exile to the scrap heap, in no small part with support from local elevator repair firms such as Erik Bleyle's. Bleyle Elevator makes replacement parts, rebuilds DC motors, and designs custom circuits to sustain these machines from a bygone era. Bleyle admits that repairs can be pricey, especially hand-rewinding a DC motor, which can run between US $30 000 and $40 000. But he says even a refurbished motor looks cheap compared with the $500 000 cost of replacing the elevator, not to mention the months of involuntary stair climbing during the upgrade.
"Usually people just go for the motor," says Bleyle. [...]
The DC grid was also always difficult to troubleshoot because faults are hard to localize on a single large circuit -- a challenge that Austin says is compounded by the scant support this forgotten technology gets from equipment vendors. Austin adapted a circa-1990 AC/DC hammer drill to create his own diagnostic tool for so-called phantom voltage -- tiny dribbles of DC flowing across blown fuses that can hoodwink unsuspecting "troublemen" and their trusty voltmeters. Austin knows he's found a phantom when he clips his modified Black & Decker Macho III hammer drill onto a circuit, pulls its trigger, and gets a whimper instead of a roar. [...]
"When you had a failure out there like a fire in a manhole, the DC grid saw it as a load and just kept on pumping power at it," says Austin. The Tenderloin fire provided fuel for critics of PG&E's maintenance record and prompted the utility to accelerate and complete an ongoing redesign of its DC supply system. PG&E finished the job and shut down its two old rectifiers at the end of 2010.
I had no idea that this lunacy was all around us here! What the!
It reminds me of the fact that our fair city also has... unique... notions about how sewers should work. (After all, no discussion of electricity is complete without a plumbing analogy.) It's one of the few cities in the world that uses the same pipes for sewage and rain drainage. Doing it that way fell out of favor some time between the Romans and the London cholera die-offs in the 1800s.
San Francisco Combined Sewers:
San Francisco collects both sewage and stormwater in the same network of pipes, then treats and discharges the combined flows to San Francisco Bay or the Pacific Ocean. Except for portions of Old Sacramento, all other cities in California have separate sewer systems, which means there are two sets of pipes in the ground. One set of pipes takes sanitary waste to the treatment plant while a second set carries stormwater runoff from street drains directly into creeks, lakes, or the ocean. [...]
San Francisco's combined system holds these large volumes of water in underground storage vaults called transport/storage (T/S) structures, which encircle the city. San Francisco built the T/S structures in the 1980s and 1990s to prevent pollution of the bay and ocean during large storms. All combined flows pass into and through these structures on their way to the treatment plants. This upgrade greatly reduced the number of sewage overflows. The current system is designed such that overflows to the bay or ocean now occur on average one to ten times per year, depending on the rainfall and the watershed.
WHY A COMBINED SYSTEM? Many United States cities built prior to 1900 had combined sewer systems. At that time, sewage treatment was not available and sewers simply directed sewage into local water bodies. When sewage treatment became necessary to protect public health, newer cities built separate systems to save on the costs of treating stormwater. Some of the older cities opted to separate their combined systems. San Francisco, already a dense urban environment, decided that separation was too costly and disruptive to the residents.
Also: A Rare Look at the Tunnels Under San Francisco:
In the early '90s my friends and I used to tape flashlights to the handlebars of our bikes and go riding around in underground storm drain tunnels. There was a whole network of these tunnels under the city that sat empty for most of the year. We would go for miles snaking up and down the sides of the tubes, clapping and yelling to see how far our echoes would carry, eventually popping out in some other part of the city covered in cobwebs and bat guano. When the tubes got too small, we laid down on skateboards and kept going. If we found a flooded part, we taped garbage bags around our legs and crossed our fingers.
Previously, previously, previously, previously, previously, previously, previously, previously.
Mirrored from jwz.org.
|Friday, November 29th, 2013|
|Bitcoin Research in Princeton CS
Continuing our post series on ongoing research in computer security and privacy here at Princeton, today I’d like to survey some of our research on Bitcoin. Bitcoin is hot right now because of the recent run-up in its value. At the same time, Bitcoin is a fascinating example of how technology, economics, and social interactions fit together to create something of value.
Our Bitcoin work started with a paper by Josh Kroll, Ian Davey and me, about the dynamics and stability of the Bitcoin mining mechanism. There was a folk theorem that the Bitcoin system was stable, in the sense that if everyone acted according to their incentives, the inevitable result would be that everyone followed the rules of Bitcoin as written. We showed that this is not the case, that there are infinitely many outcomes that are stable yet differ from the written rules of Bitcoin. So the rule-following behavior that we currently see is at best stable in the weaker sense that if everyone else is following the rules (and no one mining entity has too much power) then deviating from the rules will cost you money.
Beyond this, we have built a better understanding of the “political economy” of Bitcoin—how the Bitcoin community governs itself to keep the system operating well, despite the lack of a central authority and despite the complicated issues around the theoretical stability of the protocol. The ultimate goal of this line of work is to understand how Bitcoin is likely to deal with challenges in the future, and whether there are feasible changes that could improve the governance of Bitcoin.
Since then, we have started several more Bitcoin-related projects. My faculty colleague Arvind Narayanan (who joined us last year) as well as several more students are working on Bitcoin, and the pace has accelerated. We’re building tools to track and diagnose the behavior of the peer-to-peer network that Bitcoin participants use to spread information about what is happening. We’re looking at the dynamics of mining pools, in which a group of miners cooperate to spread the risk inherent in the mining process. We’re considering new types of double-spending attacks and how participants can defend against them.
Let me highlight one current project: we’re designing a decentralized prediction market using the Bitcoin protocol. Prediction markets enable participants to trade “shares” on potentially any event with well-defined outcomes, such as a presidential election or sporting events. The market prices of these shares can be interpreted as the probability of the event occurring. Prediction markets offer societal benefits because of this ability to accurately aggregate the wisdom of crowds. Decentralization can improve prediction markets in various ways including robustness to closure (see Intrade), greater expressivity in defining markets and outcomes, and potentially lower fees leading to more accuracy in pricing unlikely events.
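The price-as-probability reading above can be made concrete with a toy example (this is an illustration only, not code from the Princeton project): a binary share pays out 1 unit if the event occurs and 0 otherwise, so its trading price can be read directly as the market's implied probability.

```python
# Toy model of a binary prediction-market share: pays `payout` if the
# event occurs, 0 otherwise. Illustrative only; not the paper's design.

def implied_probability(price, payout=1.0):
    """Read a share's price as the market's implied event probability."""
    if not 0 <= price <= payout:
        raise ValueError("price must lie between 0 and the payout")
    return price / payout

def expected_profit(price, true_probability, payout=1.0):
    """Expected profit of buying one share, given your own estimate."""
    return true_probability * payout - price

# A share on "candidate X wins" trading at 0.62 implies a 62% probability.
print(implied_probability(0.62))   # 0.62
# If you believe the true probability is 0.70, buying has positive
# expected value (roughly 0.08 per share), which is what pulls prices
# toward the crowd's aggregate estimate.
print(expected_profit(0.62, 0.70))
```

The same mechanism explains the "wisdom of crowds" claim: anyone whose estimate differs from the price has a profit motive to trade, pushing the price toward the aggregate belief.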
There are two main difficulties: first, how can a pair of anonymous participants trade shares without a trusted party to facilitate the transaction? Second, who will arbitrate the outcome of events? This is far trickier than it sounds—even for outcomes that are completely uncontroversial, some entity or group of entities must be entrusted with the authority to declare the outcome, and there must be checks to prevent them from abusing their power. It turns out that the contract-signing capability and the consensus mechanism of Bitcoin or a Bitcoin-like system enable us to find solutions to these problems, and that is the crux of our research. This is a collaboration between Princeton researchers and soon-to-be-CITP-fellow Joseph Bonneau, Jeremy Clark at Concordia, and Andrew Miller at UMD.
The analogy is often made that Bitcoin will do to money what the Internet did to communications. If that is the case, many, many interesting and useful designs that use Bitcoin as an underlying protocol are waiting to be discovered. It’s an exciting time to be doing research in this area.
|More on Stuxnet
Ralph Langer has written the definitive analysis of Stuxnet: short, popular version, and long, technical version.
Stuxnet is not really one weapon, but two. The vast majority of the attention has been paid to Stuxnet's smaller and simpler attack routine -- the one that changes the speeds of the rotors in a centrifuge, which is used to enrich uranium. But the second and "forgotten" routine is about an order of magnitude more complex and stealthy. It qualifies as a nightmare for those who understand industrial control system security. And strangely, this more sophisticated attack came first. The simpler, more familiar routine followed only years later -- and was discovered in comparatively short order.
Stuxnet also provided a useful blueprint to future attackers by highlighting the royal road to infiltration of hard targets. Rather than trying to infiltrate directly by crawling through 15 firewalls, three data diodes, and an intrusion detection system, the attackers acted indirectly by infecting soft targets with legitimate access to ground zero: contractors. However seriously these contractors took their cybersecurity, it certainly was not on par with the protections at the Natanz fuel-enrichment facility. Getting the malware on the contractors' mobile devices and USB sticks proved good enough, as sooner or later they physically carried those on-site and connected them to Natanz's most critical systems, unchallenged by any guards.
Any follow-up attacker will explore this infiltration method when thinking about hitting hard targets. The sober reality is that at a global scale, pretty much every single industrial or military facility that uses industrial control systems at some scale is dependent on its network of contractors, many of which are very good at narrowly defined engineering tasks, but lousy at cybersecurity. While experts in industrial control system security had discussed the insider threat for many years, insiders who unwittingly helped deploy a cyberweapon had been completely off the radar. Until Stuxnet.
And while Stuxnet was clearly the work of a nation-state -- requiring vast resources and considerable intelligence -- future attacks on industrial control and other so-called "cyber-physical" systems may not be. Stuxnet was particularly costly because of the attackers' self-imposed constraints. Damage was to be disguised as reliability problems. I estimate that well over 50 percent of Stuxnet's development cost went into efforts to hide the attack, with the bulk of that cost dedicated to the overpressure attack which represents the ultimate in disguise -- at the cost of having to build a fully-functional mockup IR-1 centrifuge cascade operating with real uranium hexafluoride. Stuxnet-inspired attackers will not necessarily place the same emphasis on disguise; they may want victims to know that they are under cyberattack and perhaps even want to publicly claim credit for it.
Related: earlier this month, Eugene Kaspersky said that Stuxnet also damaged a Russian nuclear power station and the International Space Station.
|Wednesday, November 27th, 2013|
Safeplug is an easy-to-use Tor appliance. I like that it can also act as a Tor exit node.
EDITED TO ADD: I know nothing about this appliance, nor do I endorse it. In fact, I would like it to be independently audited before we start trusting it. But it's a fascinating proof-of-concept of encapsulating security so that normal Internet users can use it.
|Tuesday, November 26th, 2013|
|The FBI Might Do More Domestic Surveillance than the NSA
This is a long article about the FBI's Data Intercept Technology Unit (DITU), which is basically its own internal NSA.
It carries out its own signals intelligence operations and is trying to collect huge amounts of email and Internet data from U.S. companies -- an operation that the NSA once conducted, was reprimanded for, and says it abandoned.
The unit works closely with the "big three" U.S. telecommunications companies -- AT&T, Verizon, and Sprint -- to ensure its ability to intercept the telephone and Internet communications of its domestic targets, as well as the NSA's ability to intercept electronic communications transiting through the United States on fiber-optic cables.
After Prism was disclosed in the Washington Post and the Guardian, some technology company executives claimed they knew nothing about a collection program run by the NSA. And that may have been true. The companies would likely have interacted only with officials from the DITU and others in the FBI and the Justice Department, said sources who have worked with the unit to implement surveillance orders.
Recently, the DITU has helped construct data-filtering software that the FBI wants telecom carriers and Internet service providers to install on their networks so that the government can collect large volumes of data about emails and Internet traffic.
The software, known as a port reader, makes copies of emails as they flow through a network. Then, in practically an instant, the port reader dissects them, removing only the metadata that has been approved by a court.
The FBI has built metadata collection systems before. In the late 1990s, it deployed the Carnivore system, which the DITU helped manage, to pull header information out of emails. But the FBI today is after much more than just traditional metadata -- who sent a message and who received it. The FBI wants as many as 13 individual fields of information, according to the industry representative. The data include the route a message took over a network, Internet protocol addresses, and port numbers, which are used to handle different kinds of incoming and outgoing communications. Those last two pieces of information can reveal where a computer is physically located -- perhaps along with its user -- as well as what types of applications and operating system it's running. That information could be useful for government hackers who want to install spyware on a suspect's computer -- a secret task that the DITU also helps carry out.
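The metadata/content split described above can be sketched in a few lines, assuming nothing about the real device (the port reader's internals and its exact field list are not public; the header whitelist and field names below are purely illustrative):

```python
# Hypothetical sketch of metadata-only extraction from an email, in the
# spirit of the "port reader" described above. The real device's field
# list is not public; the whitelist here is illustrative only.
from email import message_from_string

RAW = """\
From: alice@example.org
To: bob@example.net
Date: Tue, 26 Nov 2013 10:15:00 -0500
Subject: lunch?

See you at noon. (This body would be discarded as content.)
"""

# Headers a court order might approve; note Subject is excluded, since
# it is arguably content rather than metadata.
METADATA_HEADERS = ("From", "To", "Date")

def extract_metadata(raw_message):
    """Return only whitelisted header fields plus message size."""
    msg = message_from_string(raw_message)
    record = {h: msg[h] for h in METADATA_HEADERS if msg[h] is not None}
    # Packet/traffic-size information is also collected, per the article.
    record["size_bytes"] = len(raw_message.encode())
    return record

print(extract_metadata(RAW))
```

Even this toy version shows where the legal gray zone comes from: the whole message must be copied and parsed before the content can be discarded, and the size field reveals something about the content it nominally excludes.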
Some federal prosecutors have gone to court to compel port reader adoption, the industry representative said. If a company failed to comply with a court order, it could be held in contempt.
It's not clear how many companies have installed the port reader, but at least two firms are pushing back, arguing that because it captures an entire email, including content, the government needs a warrant to get the information. The government counters that the emails are only copied for a fraction of a second and that no content is passed along to the government, only metadata. The port reader is designed also to collect information about the size of communications packets and traffic flows, which can help analysts better understand how communications are moving on a network. It's unclear whether this data is considered metadata or content; it appears to fall within a legal gray zone, experts said.
The Operational Technology Division also specializes in so-called black-bag jobs to install surveillance equipment, as well as computer hacking, referred to on the website as "covert entry/search capability," which is carried out under law enforcement and intelligence warrants.
But having the DITU act as a conduit provides a useful public relations benefit: Technology companies can claim -- correctly -- that they do not provide any information about their customers directly to the NSA, because they give it to the DITU, which in turn passes it to the NSA.
There is an enormous amount of information in the article, which exposes yet another piece of the vast US government surveillance infrastructure. It's good to read that "at least two" companies are fighting at least a part of this. Any legislation aimed at restoring security and trust in US Internet companies needs to address the whole problem, and not just a piece of it.
|Monday, November 25th, 2013|
|Web measurement for fairness and transparency
[This is the first in a series of posts giving some examples of security-related research in the Princeton computer science department. We're actively recruiting top-notch students to enter our Ph.D. program, as well as postdocs and visiting scholars. We don't have enough bandwidth here on the blog to feature everything we do, so we'll be highlighting a few examples over the next couple of weeks.]
Everything we do on the web is tracked, profiled, and analyzed. But what do companies do with that information? To what extent do they use it in ways that benefit us, versus discriminatory ways? While many concerns have been raised, not much is known quantitatively. That’s why at Princeton we’re building an infrastructure to detect, measure and reverse engineer differential treatment of web users.
Let’s consider some examples. The “filter bubble” arises when algorithmic systems, such as Google search or the Facebook news feed, decide what information to show a user based on her past pattern of searches and clicks. The worry is that users will be fed reinforcing viewpoints and eventually be isolated in their own bubble. At the level of demographics, the seemingly fair principle of treating “similar” users similarly can lead to a deepening of existing disparities. Online ads have been shown to display racial bias, and online prices and deals have been shown to vary based on users’ personal attributes.
What all these and many more examples have in common is that they are ways of using personal information for differential or discriminatory treatment. In other words, there is a machine learning system that takes personal information as input and produces a decision as output (such as one search result versus another, or a higher price versus a lower price).
Some researchers have used manual or crowdsourcing techniques to look for such differences. While that’s a great start, our approach to reverse engineering emphasizes automation, scalability, generality and speed. To this end, we’re building autonomous agents, i.e., bots, that mimic real users. Bots with different “personas” (that vary on age, gender, affluence, location, interests, and many other attributes) browse the web, carry out searches, and so forth over a period of time. As they do so, they compare the search results, prices, ads, offers, emails, and other content they receive. A single extensible infrastructure with various plugins allows measuring different types of personalization or discrimination across different sites.
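The comparison step at the end of a crawl can be sketched in a few lines of Python. The persona names and observed items below are invented for illustration; a real run would feed in the search results, prices, and ads each bot actually received.

```python
# Toy sketch of the comparison step: for each pair of personas, report
# which items one persona was shown and the other was not. Persona names
# and observations are hypothetical.
from itertools import combinations

def differential_treatment(observations: dict) -> dict:
    """observations maps persona name -> set of observed items."""
    diffs = {}
    for a, b in combinations(sorted(observations), 2):
        only_a = observations[a] - observations[b]
        only_b = observations[b] - observations[a]
        if only_a or only_b:
            diffs[(a, b)] = {"only_" + a: only_a, "only_" + b: only_b}
    return diffs

obs = {
    "affluent_urban": {"ad_luxury_car", "price_120"},
    "low_income_rural": {"ad_payday_loan", "price_120"},
}
report = differential_treatment(obs)
```

The interesting engineering, of course, is everything upstream of this diff: keeping the crawls otherwise identical so that any difference can be attributed to the persona rather than to noise.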
What excites me about this project is that the measurement platform draws heavily from diverse areas of computer science. We are using machine learning to build profiles of simulated users based on real user logs. Interpreting what we're seeing behind the scenes requires developing automated reverse-engineering techniques, which I elaborate on below. Finally, our long-term goal is to be able to run the tool at web scale to publish a frequently updated "census" of online privacy and discrimination. Successfully deploying such a platform is a significant systems research challenge. With this in mind, we have made our design highly modular so that different researchers can work on different parts of the infrastructure.
Graduate students Chris Eubank and Steve Englehardt have been working on this project, as well as a few undergraduates. CITP fellow Solon Barocas will be joining us shortly. We are gradually building the various components of the system from the ground up. Currently we’re seeing the first results from our platform, and it’s an exciting stage to be at. We are actively looking to grow our team with fresh graduate students.
One particular sub-goal that we’ve spent much of our efforts on is automated reverse engineering. There is encoded information about users that’s stored and transmitted via cookies and other mechanisms. Can we automatically “deobfuscate” this traffic to associate human-understandable semantics with it? For example, can we tell which values correspond to user IDs, interest segments, and other behavioral information? We are collaborating with researchers at KU Leuven on this project.
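One simple heuristic can illustrate the idea (this is a sketch of our own invention, not the deployed pipeline): a cookie value that stays constant across one simulated user's crawls but differs between users behaves like a user ID, whereas values that churn within a single user's sessions are likely session tokens or timestamps.

```python
# Hypothetical heuristic: flag cookie names whose values are stable
# within each simulated user but distinct across users. The crawl data
# below is invented for illustration.
def likely_user_ids(crawls: dict) -> set:
    """crawls maps persona -> list of {cookie_name: value} observations."""
    stable_per_user = {}
    for user, observations in crawls.items():
        for name in observations[0]:
            values = {obs.get(name) for obs in observations}
            if len(values) == 1:  # constant for this user
                stable_per_user.setdefault(name, {})[user] = values.pop()
    return {name for name, per_user in stable_per_user.items()
            if len(per_user) > 1
            and len(set(per_user.values())) == len(per_user)}

crawls = {
    "bot_a": [{"uid": "x91", "sess": "s1"}, {"uid": "x91", "sess": "s2"}],
    "bot_b": [{"uid": "k47", "sess": "s3"}, {"uid": "k47", "sess": "s4"}],
}
ids = likely_user_ids(crawls)
```

Here `uid` is flagged because it is stable per bot yet distinct across bots, while `sess` changes between sessions and is not.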
As a simple illustration of our techniques, the graph below shows a map of domains that synchronize cookies with advertising company AppNexus. Cookie synchronization is a protocol by which two different third-party trackers match their respective pseudonymous IDs for the same user, amplifying the privacy-infringing effect of online tracking.
Several points of note: first, this analysis is significantly deeper than tools like lightbeam for Firefox, which only observes relationships between pairs of servers. Lightbeam cannot figure out the meaning of the data that is exchanged. We, on the other hand, automate the detection of cookie synchronization — this is much harder and produces much more useful results. Second, we are working on the ability to infer even more nuanced attributes such as behavioral segments and parameters related to ad auctions. Third, we are doing this measurement at web scale rather than building a personal tool for a single user. Our goal is a web privacy census: a comprehensive map of which entities are collecting what information, what they are inferring from it, and who they are sharing it with. It is an important step toward our ultimate goal of figuring out how users are treated based on that information.
It is our hope that bringing transparency to the currently invisible collection and use of personal data online will lead to greater public awareness and a more informed debate on the merits and dangers of these practices. In the case of particularly inappropriate uses of personal data, our measurement infrastructure could aid regulatory action. At present, online trackers operate at an unacceptable level of obscurity. We view our transparency initiative as a key component of digital democracy, and invite you to join us.
 Specifically, the graph was constructed as follows. Cookie synchronization typically involves a first-party domain A embedding a third-party tracker B which redirects to another third-party tracker C. When we observe an instance of this in our web crawl data, we create a red edge from A to B and a grey edge from B to C.
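Under that description, the edge construction itself is straightforward. A minimal Python sketch, with invented domain names standing in for real crawl data:

```python
# Sketch of the graph construction above: each observed sync event is a
# triple (first-party A, embedded tracker B, redirect target C), giving
# a red edge A->B and a grey edge B->C. Domains are hypothetical.
def build_sync_graph(redirect_chains):
    """Return a set of (source, destination, color) edges."""
    edges = set()
    for first_party, tracker_b, tracker_c in redirect_chains:
        edges.add((first_party, tracker_b, "red"))
        edges.add((tracker_b, tracker_c, "grey"))
    return edges

chains = [
    ("news.example", "trackerB.example", "adnexus.example"),
    ("shop.example", "trackerB.example", "adnexus.example"),
]
graph = build_sync_graph(chains)
```

Because edges are stored in a set, a tracker-to-tracker sync observed on many first-party sites collapses into a single grey edge, which is what makes the hub structure around large ad exchanges visible.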
|NSA Strategy 2012-16: Outsourcing Compliance to Algorithms, and What to Do About It
Over the weekend, two new NSA documents revealed a confident NSA SIGINT strategy for the coming years and a vast increase in NSA-malware-infected networks across the globe. The excellent reporting overlooked one crucial development: constitutional compliance will increasingly be outsourced to algorithms. Meaningful oversight of intelligence practices must address this, or face collateral constitutional damage.
The New York Times revealed the NSA SIGINT strategy for 2012-2016, while Dutch daily NRC [English] provided more facts about the Boundless Informant program. Both reports have been re-reported and re-tweeted extensively, so I won't waste your precious time repeating that the NSA thinks we live in a golden age of surveillance and reflects on mastering global communications, aggressively increasing legal authorities and how to further break encryption (probably HTTPS) – which again seems to work against dragnet surveillance. Or that the NSA has infected 50,000 networks around the world with malicious code that it can activate remotely, while seeking to expand to 85,000 networks anytime soon.
One aspect I haven’t seen in the media reports so far is highly relevant for the legislative proposals seeking to improve oversight on intelligence gathering. Consider these strategic objectives for 2012-16 [pdf]:
4.2. (U//FOUO) Build compliance into systems and tools to ensure the workforce operates within the law and without worry
5.2. (U//FOUO) Build into systems and tools, features that enable and automate end-to-end value-based assessment of SIGINT products and services
Compliance and value assessment are to be outsourced to algorithms. For the NSA, this is the way forward to surveillance 'without worry'. Not for the rest of us.
The minimization procedures that were supposed to protect US citizens against bulk surveillance were based on a rather flaky assumption of 51% 'foreignness', as the NSA put it. Such algorithmic compliance probably got the go-ahead from the FISA court without proper inspection of the code, which may have resulted in mass spying on millions of Americans. The NSA held that its surveillance programs had been authorized by the court, so why are people worrying?
Ed Felten has written about software transparency before on this blog. That concept helps to think about the new kind of legal oversight needed for 21st-century intelligence gathering. Technical experts need to inspect algorithmic compliance mechanisms, advise judges, and technically vet their constitutional assessment. This is hard, and needs more thought, but a strong combination of technical and legal analysis is the only way to render oversight of intelligence practices and minimization procedures meaningful going forward.
I have argued before that surveillance based on nationality is not in the interest of Americans. Regardless of what Washington makes of that message, I haven't seen this maxim of combined legal and technical oversight in any of the current legislative proposals to limit the intelligence reach of the NSA. Especially when the NSA delegates compliance to algorithms, the absence of software transparency for compliance mechanisms all but guarantees collateral constitutional damage.
|Surveillance as a Business Model
Google recently announced that it would start including individual users' names and photos in some ads. This means that if you rate some product positively, your friends may see ads for that product with your name and photo attached—without your knowledge or consent. Meanwhile, Facebook is eliminating a feature that allowed people to retain some portions of their anonymity on its website.
These changes come on the heels of Google's move to explore replacing tracking cookies with something that users have even less control over. Microsoft is doing something similar by developing its own tracking technology.
More generally, lots of companies are evading the "Do Not Track" rules, which were meant to give users a say in whether companies track them. It turns out that the whole "Do Not Track" legislation has been a sham.
It shouldn't come as a surprise that big technology companies are tracking us on the Internet even more aggressively than before.
If these features don't sound particularly beneficial to you, it's because you're not the customer of any of these companies. You're the product, and you're being improved for their actual customers: their advertisers.
This is nothing new. For years, these sites and others have systematically improved their "product" by reducing user privacy. This excellent infographic, for example, illustrates how Facebook has done so over the years.
The "Do Not Track" law serves as a sterling example of how bad things are. When it was proposed, it was supposed to give users the right to demand that Internet companies not track them. Internet companies fought hard against the law, and when it was passed, they fought to ensure that it didn't have any benefit to users. Right now, complying is entirely voluntary, meaning that no Internet company has to follow the law. If a company does, because it wants the PR benefit of seeming to take user privacy seriously, it can still track its users.
Really: if you tell a "Do Not Track"-enabled company that you don't want to be tracked, it will stop showing you personalized ads. But your activity will be tracked -- and your personal information collected, sold and used -- just like everyone else's. It's best to think of it as a "track me in secret" law.
Of course, people don't think of it that way. Most people aren't fully aware of how much of their data is collected by these sites. And, as the "Do Not Track" story illustrates, Internet companies are doing their best to keep it that way.
The result is a world where our most intimate personal details are collected and stored. I used to say that Google has a more intimate picture of what I'm thinking of than my wife does. But that doesn't go far enough: Google has a more intimate picture than I do. The company knows exactly what I am thinking about, how much I am thinking about it, and when I stop thinking about it: all from my Google searches. And it remembers all of that forever.
As the Edward Snowden revelations continue to expose the full extent of the National Security Agency's eavesdropping on the Internet, it has become increasingly obvious how much of that has been enabled by the corporate world's existing eavesdropping on the Internet.
The public/private surveillance partnership is fraying, but it's largely alive and well. The NSA didn't build its eavesdropping system from scratch; it got itself a copy of what the corporate world was already collecting.
There are a lot of reasons why Internet surveillance is so prevalent and pervasive.
One, users like free things, and don't realize how much value they're giving away to get them. We know that "free" is a special price that confuses people's thinking.
Google's 2013 third quarter profits were nearly $3 billion; that profit is the difference between how much our privacy is worth and the cost of the services we receive in exchange for it.
Two, Internet companies deliberately make privacy not salient. When you log onto Facebook, you don't think about how much personal information you're revealing to the company; you're chatting with your friends. When you wake up in the morning, you don't think about how you're going to allow a bunch of companies to track you throughout the day; you just put your cell phone in your pocket.
And three, the Internet's winner-takes-all market means that privacy-preserving alternatives have trouble getting off the ground. How many of you know that there is a Google alternative called DuckDuckGo that doesn't track you? Or that you can use cut-out sites to anonymize your Google queries? I have opted out of Facebook, and I know it affects my social life.
There are two types of changes that need to happen in order to fix this. First, there's the market change. We need to become actual customers of these sites so we can use purchasing power to force them to take our privacy seriously. But that's not enough. Because of the market failures surrounding privacy, a second change is needed. We need government regulations that protect our privacy by limiting what these sites can do with our data.
Surveillance is the business model of the Internet -- Al Gore recently called it a "stalker economy." All major websites run on advertising, and the more personal and targeted that advertising is, the more revenue the site gets for it. As long as we users remain the product, there is minimal incentive for these companies to provide any real privacy.
This essay previously appeared on CNN.com.
|Sunday, November 24th, 2013|
|Thursday, November 21st, 2013|
|Improve Connectivity in Rural Communities – Principle #9 for Fostering Civic Engagement Through Digital Technologies
In my recent blog posts, I have been discussing ways that citizens can communicate with government officials through the Internet, social media, and wireless technology to solve problems in their communities and to affect public policy. Using technology for civic engagement, however, should not be limited to communications with elected or appointed government officials. One of the themes I have sought to address across my series of posts – and will discuss in more detail today – is that citizen-to-citizen communication through digital technologies for civic purposes is extremely important in building healthy communities. This is particularly true in rural areas. Improving digital connectivity in rural areas will help people communicate more effectively with civic institutions, such as schools and libraries, and commercial entities, such as commodities markets, that affect residents' daily lives and economic well-being.
Earlier this year I met with Tom Koutsky, Chief Policy Counsel for Connected Nation, a non-profit working to “accelerate broadband availability in underserved areas and increase broadband use in all areas.” When I told Mr. Koutsky that I wanted to learn more about the role of digital technologies in fostering civic engagement in rural areas, he told me that “you can’t develop one size fits all for non-urban areas. Not all rural communities have the same challenges, even if they are clearly different from urban areas.”
For example, Connected Nation evaluated several South Carolina counties and found that overall the three main challenges were access (i.e. the number of providers), adoption (encompassing digital literacy and computer training) and the lack of telecommunications infrastructure linking interested industry or government users to the network. The combinations of those challenges, however, varied by county. In Saluda County, Connected Nation found a lack of infrastructure and industry, but solid training programs. In Greenwood County, backhaul infrastructure was in place, but not enough providers were operating.
There are a wide variety of organizations seeking to increase broadband adoption in rural areas and the engagement in civic and economic life that follows, including the American Farm Bureau, Microsoft, the United States Cattlemen’s Association, the Bill & Melinda Gates Foundation, the Communications Workers of America, and many telecommunications providers serving rural areas. Mr. Koutsky told me that these organizations focus on building “one-to-many” relationships by forming partnerships between “anchor systems” – valued organizations with built-in constituencies in local communities – and the key civic institutions in the community, for instance, libraries and school systems. One project Mr. Koutsky mentioned, for example, is a statewide digital literacy effort with fifteen to twenty Boys Clubs in Tennessee. In addition, churches in rural areas are opening technology centers and community centers are hosting job-training programs, designed to teach adults digital literacy.
Individuals and organizations working to improve broadband adoption are going directly to citizens because the governmental entities supporting broadband adoption vary greatly from state to state and can be difficult for citizens to identify on their own. For example, broadband adoption programs are supported through Connect Texas, which resides in the Texas Department of Agriculture and its Rural Affairs team. The State Librarian is the leader on broadband policy for Nevada’s seventeen counties. In Michigan, Connected Nation works with the state’s Public Service Commission.
Connected Nation’s approach to assisting rural communities in identifying solutions that allow them to use broadband to spur innovation and civic participation is similar to the process that I discussed in an earlier post regarding Memphis’s efforts to revitalize several of its neighborhoods. Connected Nation evaluates the strengths and weaknesses of rural communities by sending local planning teams to build data-driven profiles of each area. This approach allows Connected Nation to tailor broadband adoption solutions to specific communities and community leaders to measure their areas according to a common framework – the National Broadband Plan.
Connected Nation’s community facilitators help community leaders share information about their needs and make plans as to how broadband can be deployed around the civic and economic drivers in the community such as agricultural facilities, industrial plants, schools, hospitals, government buildings, and tourist attractions. Once community planners have, for example, mapping data on where broadband providers are already operating in the community, local governments can make better-informed decisions about where economic development such as a new subdivision or hospital should be located. Where, for example, additional wireless infrastructure is needed, the various stakeholders such as the water company with a tall water tower, the wireless provider seeking a site for an antenna, the school superintendent researching potential locations for a new school, and farmers whose equipment relies on GPS, have an opportunity to discuss their high speed Internet needs and work cooperatively. In addition, telecommunications providers will learn where the opportunities are for potential future build-out and where their competitors are investing in facilities. Mr. Koutsky suggested that this type of information will “allow the market to shape itself.”
What happens in a rural community with affordable access to wireline and wireless broadband? Its residents in the agricultural industry use modern, self-guided farm equipment, such as tractors that are dependent on GPS systems and satellites. Farmers have the ability to track commodities prices and conduct transactions through wireline broadband connections and through wireless devices. Parents and teachers can communicate about a child’s education through e-mail. Students can bring their own devices back and forth from school and use the Internet to bridge the gap between class and home. Over the long-term, this type of access will be critical to the survival of small communities as they are better able to compete with urban areas to keep their own young people and attract investment and new residents from elsewhere.