
CircleID

US Congress Considering Legislation to Authorize Faster Access to International Electronic Data

Legislation called the Clarifying Lawful Overseas Use of Data Act, or CLOUD Act, was introduced in Congress on Monday, aimed at creating a clearer framework for law enforcement access to data stored in cloud computing systems. Ali Breland, reporting in The Hill: "[The] bill is aimed at making it easier for U.S. officials to create bilateral data sharing agreements that allow them to access data stored overseas and also for foreign law enforcement to access data stored on U.S. firms' servers. ... Federal law currently doesn't specify whether the government can demand that U.S. companies give it data they have stored abroad. The CLOUD Act would amend this, likely impacting Microsoft's pending Supreme Court case over data it has stored in Ireland."

More under: Cloud Computing, Data Center, Law

U.S. Lawmakers Moving to Consider New Rules Imposing Stricter Federal Oversight on Cryptocurrencies

Reuters reports today that, according to several top lawmakers, "bipartisan momentum is growing in the Senate and House of Representatives for action to address the risks posed by virtual currencies to investors and the financial system." David Morgan reports: "Even free-market Republican conservatives, normally wary of government red tape, said regulation could be needed if cryptocurrencies threaten the U.S. economy. ... Much of the concern on Capitol Hill is focused on speculative trading and investing in cryptocurrencies, leading some lawmakers to push for digital assets to be regulated as securities and subject to the SEC's investor protection rules."

More under: Blockchain, Law, Policy & Regulation

SpaceX Launching Two Experimental Internet Satellites This Weekend

On Saturday, SpaceX will launch two experimental mini-satellites that will pave the way for the first batch of a planned 4,000-satellite constellation providing low-cost internet around the Earth. George Dvorsky, reporting in Gizmodo: "Announced back in 2015, Starlink is designed to be a massive, space-based telecommunications network consisting of thousands of interlinked satellites and several geographically dispersed ground stations. ... The plan is to have a global internet service in place by the mid-2020s, and get a leg-up on potential competitors. ... Two prototypes, named Microsat 2a and 2b, are now packed and ready for launch atop a Falcon-9 v1.2 rocket."

More under: Access Providers, Broadband, Wireless

A Brooklyn Bitcoin Mining Operation is Causing Interference to T-Mobile's Broadband Network

AntMiner S5 Bitcoin Miner by Bitmain, released in 2014. The S5 has since been surpassed by newer models.

The Federal Communications Commission on Thursday sent a letter to an individual in Brooklyn, New York, alleging that a device in the individual's residence used to mine Bitcoin is generating spurious radiofrequency emissions, causing interference to a portion of T-Mobile's mobile telephone and broadband network. The letter states that the FCC received a complaint from T-Mobile concerning interference to its 700 MHz LTE network in Brooklyn. In response to the complaint, agents from the Enforcement Bureau's New York Office used direction-finding techniques to confirm that radio emissions in the 700 MHz band were, in fact, emanating from the user's residence. "When the interfering device was turned off the interference ceased. ... The device was generating spurious emissions on frequencies assigned to T-Mobile's broadband network and causing harmful interference." The FCC's warning letter further states that the user's "Antminer s5 Bitcoin Miner" operation constitutes a violation of federal law and could subject the operator to severe penalties, including substantial monetary fines and arrest.

FCC Commissioner Jessica Rosenworcel said in a tweet: "Okay, this @FCC letter has it all: #bitcoin mining, computing power needed for #blockchain computation and #wireless #broadband interference. It all seems so very 2018."

More under: Access Providers, Blockchain, Broadband, Telecom, Wireless

Hackers Earned Over $100K in 20 Days Through Hack the Air Force 2.0

The participating U.S. Airmen and hackers at the conclusion of h1-212 in New York City on Dec 9, 2017

HackerOne has announced the results of Hack the Air Force 2.0, its second Air Force bug bounty challenge in less than a year, which invited trusted hackers from all over the world to take part. The 20-day challenge was the most inclusive government bug bounty program to date, with 26 countries invited to participate. From the report: "Hack the Air Force 2.0 is part of the Department of Defense's (DoD) Hack the Pentagon crowd-sourced security initiative. Twenty-seven trusted hackers successfully participated in the Hack the Air Force bug bounty challenge — reporting 106 valid vulnerabilities and earning $103,883. Hackers from the U.S., Canada, United Kingdom, Sweden, Netherlands, Belgium, and Latvia participated in the challenge. The Air Force awarded hackers the highest single bounty award of any Federal program to-date, $12,500."

More under: Cybersecurity

WHOIS Inaccuracy Could Mean Noncompliance with GDPR

The European Commission recently released technical input on ICANN's proposed GDPR-compliant WHOIS models that underscores the GDPR's "Accuracy" principle — making clear that reasonable steps should be taken to ensure the accuracy of any personal data obtained for WHOIS databases and that ICANN should be sure to incorporate this requirement in whatever model it adopts. Contracted parties concerned with GDPR compliance should take note.

According to Article 5 of the regulation, personal data shall be "accurate and, where necessary, kept up to date; every reasonable step must be taken to ensure that personal data that are inaccurate, having regard to the purposes for which they are processed, are erased or rectified without delay." This standard is critical for maintaining properly functioning WHOIS databases and would be a significant improvement over today's insufficient standard of WHOIS accuracy. Indeed, European Union-based country code TLDs require rigorous validation and verification, much more in line with GDPR requirements — a standard to strive for.

The stage is set for an upgrade to WHOIS accuracy: ICANN's current approach to WHOIS accuracy simply does not comply with GDPR. Any model selected by ICANN to comply with GDPR must be accompanied by new processes to validate and verify the contact information contained in the WHOIS database. Unfortunately, the current Registrar Accreditation Agreement, which includes detailed provisions requiring registrars to validate and verify registrant data, does not go far enough to meet these requirements.

At a minimum, ICANN should expedite the implementation of cross-field validation, which is required by the 2013 RAA but to date has not been enforced. These activities should be supplemented by examining other forms of validation, building on ICANN's experience in developing the WHOIS Accuracy Reporting System (ARS), which examines the accuracy of contact information from the perspective of syntactic and operational validity. Validation and accuracy of WHOIS data has also been a long-discussed matter within the ICANN community: the 2014 Final Report from the Expert Working Group on gTLD Directory Services: A Next-Generation Registration Directory Service (RDS) devotes an entire chapter to "Improving Data Quality," with a recommendation for more robust validation of registrant data. And, not insignificantly, ICANN has already investigated and deployed validation systems in its operations, including those used by its Compliance department to investigate accuracy complaints.
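To see what cross-field validation adds beyond per-field syntax checks, consider a toy sketch (purely illustrative, not ICANN's or any registrar's actual tooling; the field formats and dial-code table below are simplified assumptions): a record can be syntactically well-formed in every field yet still inconsistent across fields, such as a phone prefix that does not match the registrant's declared country.

```python
import re

# Hypothetical country-to-dial-code table, for illustration only.
DIAL_CODES = {"US": "1", "IE": "353", "DE": "49"}

def syntactic_valid(record):
    """Per-field checks: each field is well-formed in isolation."""
    email_ok = re.fullmatch(r"[^@\s]+@[^@\s]+\.[A-Za-z]{2,}", record["email"])
    phone_ok = re.fullmatch(r"\+\d{1,3}\.\d{4,14}", record["phone"])  # EPP-style +CC.number
    return bool(email_ok and phone_ok)

def cross_field_valid(record):
    """Cross-field check: the phone's dial code must match the declared country."""
    prefix = record["phone"].lstrip("+").split(".", 1)[0]
    return DIAL_CODES.get(record["country"]) == prefix

registrant = {"email": "admin@example.com", "phone": "+353.12345678", "country": "IE"}
print(syntactic_valid(registrant), cross_field_valid(registrant))  # True True
```

A GDPR-grade accuracy regime would layer operational checks (for example, confirming that the mailbox or phone number actually works) on top of consistency checks like these.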

Despite its significance to the protection and usefulness of WHOIS data, the accuracy principle is surprisingly absent from the three WHOIS models presented by ICANN for discussion among relevant stakeholders. Regardless of which model is ultimately selected, the accuracy principle must be applied to any WHOIS data processing activity in a manner that addresses GDPR compliance — both at inception, when a domain is registered, and later, when data is out of date.

All stakeholders can agree that WHOIS data is a valuable resource for industry, public services, researchers, and individual Internet users. Even setting the GDPR "Accuracy" principle aside, taking steps to protect the confidentiality of this resource would be meaningless if the data itself were not accurate or complete.

Written by Fabricio Vayra, Partner at Perkins Coie LLP

More under: Domain Names, ICANN, Privacy, Whois

Who Will Crack Cloud Application Access SLAs?

The chart below ought to be in every basic undergraduate textbook on packet networking and distributed computing. That it is absent says much about our technical maturity level as an industry. But before we look at what it means, let's go back to some basics.

When you deliver a utility service like water or gas, there's a unit for metering its supply. The electricity wattage consumed by a room is the sum of the wattage of the individual appliances. The house consumption is the sum of the rooms, the neighbourhood is the sum of the houses, and so on. Likewise, we can add up the demand for water, using litres.

These resource units "add up" in a meaningful way. We can express a service level agreement (SLA) for utility service delivery in that standard unit in an unambiguous way. This allows us to agree the final end-user delivery, as well as to contract supply at any administrative or management boundary in the delivery network.

What's really weird about the broadband industry is that we've not yet got a standard metric of supply and demand that "adds up." What's even more peculiar is that people don't even seem to be aware of its absence, or feel the urge to look for one. What's absolutely bizarre is that it's hard to get people interested even when you do finally find a really good one!

Picking the right "unit" is hard because telecoms is different to power and water in a crucial way. With these physical utilities, we want more of something valuable. Broadband is an information utility, where we want less of something unwanted: latency (and in extremis, loss). That is a tricky conceptual about-turn.

So we're selling the absence of something, not its presence. It's kind of like asking "how much network latency mess-up can we deal with in order to deliver a tolerable level of application QoE screw-up?" Ideally, we'd like zero "mess-up" and "screw-up," but that's not on offer. And no, I don't expect ISPs to begin advertising "a bit less screwed-up than the competition" to consumers anytime soon!

The above chart breaks down the latency into its independent constituent parts. What it says is:

  • For any network (sub-)path, the latency comprises (G)eographic, packet (S)ize, and (V)ariable contention delay — the "vertical" (de)composition.
  • Along the "horizontal" path the "Gs", "Ss", and "Vs" all "add up". (They are probability distributions, not simple scalars, but it's still just ordinary algebra.)
  • You can "add up" the complete end-to-end path "mess-up" by breaking each sub-path "mess-up" into G, S and V; then adding the Gs, Ss, and Vs "horizontally"; and then "vertically" recombining their "total mess-up" (again, all using probability functions to reflect we are dealing with randomness).

And that's it! We've now got a mathematics of latency which "adds up", just like wattage or litres. It's not proprietary, nobody holds a patent on it, everyone can use it. Any network equipment or monitoring enterprise with a standard level of competence can implement it as their network resource model. It's all documented in the open.
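To make the composition concrete, here is a minimal sketch (my own illustration of the idea under simplifying assumptions: independent hops, delays discretised into 1 ms bins, all numbers hypothetical). Each hop's G and S terms shift its delay distribution, V spreads it, and the end-to-end distribution is just the convolution of the per-hop distributions:

```python
import numpy as np

def hop_delay(g_ms, s_ms, v_pmf):
    """Delay PMF for one hop: fixed (G)eographic and packet-(S)ize delay
    shift the distribution; v_pmf is the (V)ariable contention PMF."""
    fixed = int(round(g_ms + s_ms))        # deterministic delay, in 1 ms bins
    pmf = np.zeros(fixed + len(v_pmf))
    pmf[fixed:] = v_pmf                    # shift V by the fixed delay
    return pmf

def compose(*hop_pmfs):
    """End-to-end delay PMF for a path: the convolution of independent
    per-hop PMFs (this is how random delays 'add up')."""
    total = np.array([1.0])                # zero delay with probability 1
    for pmf in hop_pmfs:
        total = np.convolve(total, pmf)
    return total

# Two hypothetical hops; each V gives P(0 ms), P(1 ms), P(2 ms) of queueing.
hop1 = hop_delay(g_ms=5, s_ms=1, v_pmf=np.array([0.7, 0.2, 0.1]))
hop2 = hop_delay(g_ms=20, s_ms=2, v_pmf=np.array([0.5, 0.3, 0.2]))

path = compose(hop1, hop2)
mean = float(np.arange(len(path)) @ path)          # expected latency, ms
p99 = int(np.searchsorted(np.cumsum(path), 0.99))  # 99th-percentile latency, ms
print(f"mean {mean:.2f} ms, 99th percentile {p99} ms")
```

Because convolution is associative, sub-path distributions can be computed at any administrative boundary and then composed, which is exactly the property that lets an SLA "add up" across a supply chain.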

This may all seem a bit like science arcana, but it has real business implications. Adjust your retirement portfolio accordingly! Because it's really handy to have a collection of network SLAs that "add up" to a working telco service or SaaS application. In order to do that, you need to express them in a unit that "adds up".

In theory, big telcos are involved in a "digital transformation" from commodity "pipes" into cloud service integration companies. With the occasional honourable exception (you know who you are!), there doesn't seem to be much appetite for engaging with fundamental science and engineering. Most major telcos are technological husks that do vendor contract management, spectrum hoarding, and regulatory guerrilla warfare, with a bit of football marketing on the side.

In contrast, the giant cloud companies (like Amazon and Google) are thronged with PhDs thinking about flow efficiency, trade-offs and protocols, and how to globally optimise the whole data-centre-to-user-device system. They also commonly own the environment that delivers the user experience (smart TV, smartphone, tablet, etc.). Plus there's the hyper-distribution capability of app stores to reach all endpoints very quickly. So they are well positioned to drive an application-centric model.

There are big cost savings and quality of experience gains to be had by adopting "standard" metrics and "composable" SLAs. (Try delivering electricity or water without standardised units to see why!) For newer distributed applications, you can't deliver them at all without adopting "proper" engineering and rigorous science: a rocket isn't just a scaled-up firework. So whoever masters this very basic idea of a unit that "adds up" is in a better position to economically command the value chain.

The strategic questions are these:

  • Will telcos "get it" and take over the supply chain from the "inside, outwards"? Or will cloud companies "get it" and invade telecoms from the "outside, inwards"?
  • How will the profit pool get re-divided as a result? This is a bit like how things shifted between handsets and networks when power transitioned from Nokia to Apple.

My bet is that the answer is the "outside-in" case: whoever captures the end user experience using metrics that "add up" is in a position to then contract and command the rest of the supply chain to do its will. Telcos will not auto-transform; they will be forcibly transformed. The (enterprise and cloud) connectivity "buy side" has the incentives to tighten up the SLAs on offer; the "sell side" mostly seems pretty content with the status quo.

It is a bit like the 1990s, when there was a big debate about how best to deliver mobile coverage through building walls. In the battle between macrocells and microcells, "outside-in always wins." You don't try to cover outdoors from indoors; you do try to cover indoors from outdoors. Indeed, everything was configured to meet the most "outdoors" condition. We call them "mobile" networks for a reason!

So are "cloud SLA networks" the "new mobile networks"? We will find out! I think so. You can tell who really "gets it" by who adopts a "unit" of supply and demand that properly "adds up." This is the essential prerequisite for a new "digital supply chain management" industry to emerge. Because at the end of the day, if you can't "add up" your cloud application demand, and build a matching network supply SLA, then that's a big strategic minus.

Written by Martin Geddes, Founder, Martin Geddes Consulting Ltd

More under: Access Providers, Broadband, Cloud Computing, Telecom