Author: admin

SocialDrift Review – What I learned from using Instagram automation

There’s a lot out there about Instagram automation right now. In particular, there have been mixed opinions about whether automation tools are worth your time and money. Some discourage their use entirely, while others claim they’re essential to any brand’s success.

After looking around and doing some research, I decided to check it out firsthand. For the last two and a half months I’ve been using SocialDrift to see what automation is really like.

Here’s what I discovered through my experience with it:

1. Stick with one tool at a time.

A common mistake that people make is that they give multiple automation services access to their Instagram account. They assume that more tools running at once will result in even more growth.

However, that’s just not the case. In fact, running multiple tools will likely do more harm than good.

The dilemma here is that Instagram will find this sort of activity suspicious. The app is built to monitor your activity and make sure that it stays within sensible human capacities. If you are giving out too many likes and comments per minute, there is a chance that you’ll get banned.

Most automation tools are built to obey these restrictions. For instance, SocialDrift uses SecureBoost to prevent your account from going outside the limits. So you’re better off trusting them to do their job on their own, rather than trying to stack several of them and pushing it to the brink.

Thankfully I learned about this prior to getting started, so everything went smoothly. Always remember to dig around and investigate before you try anything!

2. Always go over the tutorial.

Sometimes it’s hard to resist the temptation of jumping in and learning on the fly. I know I’m certainly bad about this. Patience has never been my strongest suit. However, this is an instance where I would strongly advocate for it.

That’s not to say that SocialDrift isn’t intuitive. Its layout is actually very clear and easy to understand. Yet, I still think there’s a lot to be gained from getting a quick primer about the controls. The same goes for any other automation tool.

Many tutorials give you handy tips and suggestions that you might not have considered. They’re also helpful to refer back to when you forget how something is supposed to work.

3. Define your desired audience.

Is your automation tool not working as effectively as you had hoped? One reason for this issue might be that you haven’t set your targeting filters to be specific enough.

The first thing you need to do is narrow down which demographics you are trying to reach. Forming a clear, distinct profile of your intended audience will make it easier to tailor your interactions to their needs and interests.

One approach that is severely underutilized is engaging with smaller accounts. This includes users whose posts don’t tend to get a lot of activity. They are the most likely to appreciate your recognition and return the favor. With SocialDrift, you can set a minimum and maximum number of likes for the content that you interact with.

If you run a local business with a physical location, it’d be wise to target users within the nearby area. SocialDrift has a feature called Place Targeting that can aid you with that. It also has options to interact with users that use certain hashtags or follow certain accounts.

4. Use the Unfollow option strategically.

Some might wonder why you would ever need to unfollow others. The more the merrier, right?

The problem is that Instagram doesn’t let you follow any more than 7,500 accounts. The only way you can follow more people than that is if you did it before the limits were implemented.

So, it’s sometimes necessary to prune the number of accounts you follow. Generally you want to cut it down to about half the size of your follower count once you’ve gained a decent amount. This will make your account seem more exclusive and popular.

SocialDrift has an automatic unfollowing feature that can be found at the top of the page in the Activity Types box. Initially I was skeptical about using it, as I was concerned it’d start unfollowing too many people, but it does a great job at being selective and gradual.

5. Customize and change up your automated comments.

Just about every automation tool offers the ability to send comments on your behalf. It cuts down on the tedious and time-consuming work that it would take to comment on hundreds of posts every day.

SocialDrift’s approach is to let you create these comments, which it will use in rotation. I would recommend writing at least a dozen of them. A greater variety of comments will make your activity seem more organic and less generic. Make sure to add new ones periodically, and replace old ones that have been in use for a while.
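The rotation idea is simple to picture. Here is a hypothetical sketch (not SocialDrift’s actual code, and these comment strings are made up for illustration) of cycling through a pool of comment templates:

```python
import itertools

# Hypothetical comment pool; in practice you'd write a dozen or more
# templates and refresh them periodically so activity stays varied.
comment_pool = [
    "Love this shot!",
    "Great content, keep it up!",
    "This made my day.",
]

# itertools.cycle loops over the pool indefinitely, wrapping around
# after the last entry.
rotation = itertools.cycle(comment_pool)
posted = [next(rotation) for _ in range(5)]
print(posted)
```

A larger pool means any single template repeats less often, which is exactly why writing at least a dozen pays off.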

6. Continue answering questions and talking to your followers.

While automated comments are extremely useful, they shouldn’t completely replace all interaction with your audience. You should still address any concerns or queries they might have, and strike up friendly conversations whenever you have the chance. Show them that there are still humans running the account, ready and willing to pay attention to them.

7. Optimize your settings.

Here’s an embarrassing admission: when I started my first month of SocialDrift, I mostly ignored the button labeled Optimization above the Account Settings. I took a cursory glance at it, of course, but for some reason I dismissed it as extraneous and didn’t return for a while. This is something that I regret immensely now.

The page features an evolving list of recommended settings adjustments for your account. Its purpose is to help you generate the best results possible. The list is also color-coded and sorted by urgency, with red being the most important.

Taking heed of optimization advice could make a big difference and noticeably increase your growth. That was certainly the case for me, and I only wish I had taken it seriously sooner.

8. Focus on delivering high-quality content.

The greatest benefit from using automation is that it gives you the freedom to put more time into developing your content. While automation tools undoubtedly will bring more people to your page, it’s your content that will make them stick around and follow you.

SocialDrift has proven invaluable in this regard. Thanks to its assistance, I feel like I’m at the top of my Instagram game now.


ZombieChain Comes Alive: Can Ethereum Sidechains Save the Dapps?

That decision will cost you half a cent. Are you sure that’s the right move?

If you’re a gamer, decentralized applications (dapps) hold an enticing promise: you might finally be able to truly own virtual in-game items and accumulate them without worrying about a company changing the rules and taking them away. But as with other big blockchain ideas, that’s not quite a reality today.

One reason is that the economics of how this would work are uncertain. To commit an action to the ethereum blockchain, users need to expend gas, a unit of value that’s priced in ether, the network’s cryptocurrency, and that fluctuates based on how much other people are using the network at any given time.
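To make the arithmetic concrete, here is a minimal sketch of how an ethereum transaction fee is computed; the 5 gwei gas price is an illustrative assumption, since actual prices move with network demand:

```python
GWEI = 10 ** -9  # 1 gwei = one billionth of an ether

def tx_fee_eth(gas_used: int, gas_price_gwei: float) -> float:
    """Fee in ether: gas consumed multiplied by the price per unit of gas."""
    return gas_used * gas_price_gwei * GWEI

# A plain ether transfer consumes 21,000 gas; contract calls cost more.
fee = tx_fee_eth(21_000, 5)  # assumed 5 gwei gas price
print(f"{fee:.6f} ETH")
```

Because the gas price swings with congestion, the same in-game action can cost very different amounts at different times, which is the user-experience problem Loom is trying to sidestep.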

For Loom Network, a startup specializing in applying blockchain technology to gaming dapps, that just won’t do. Constant microtransactions harm user experience, even if network traffic isn’t pushing up gas prices at a given moment, as happened during the recent CryptoKitties boom.

Loom co-founder James Duffy told CoinDesk in a recent interview, “there’s just a mental transaction cost.”

He continued:

“Even if you’re spending a fraction of a penny every time you move your character, people still have to make decisions about whether it’s worthwhile to make a move [when] they know every single thing they’re doing is costing them.”

With that problem in mind, Duffy announced Loom’s newest offering – a ready-made “shared sidechain” that dapp developers can use in exchange for a monthly fee – this week. ZombieChain, as it’s called, is expected to launch in a month or two.

So far, no developers have signed up to build dapps on it, but the Loom team is excited about how it advances their ideas and vision.

“ZombieChain’s model more closely parallels traditional web hosting,” Duffy wrote in the announcement, “where developers pay a flat monthly fee based on the resources consumed by their application, upgrading their web server and paying more as their app grows in popularity over time.”

The idea of a shared sidechain, Duffy believes, has the potential to help gaming dapps achieve scale while making life easier for users and developers alike.

The alternatives, as they stand today, are: one, to house games on ethereum’s main chain, with its poor user experience; or two, to build a dedicated sidechain for each game.

“Not everyone wants to do that,” Duffy told CoinDesk – hence ZombieChain has come to life.

Sidestepping scalability

Broadly, sidechains have a long pedigree in cryptocurrencies, going back to Adam Back and other developers’ 2014 proposal for bitcoin “pegged sidechains.”

The idea is to complete transactions on smaller, nimbler chains that are later reconciled to the main blockchain – ethereum, in Loom’s case. Sidechain users sacrifice some of the security and decentralization of the main chain, since they depend on a smaller number of “validators” – analogous to miners – to register their transactions.

But they gain in terms of throughput, that is, the rate at which transactions can be completed.

Loom Network took this idea and introduced the concept of “application-specific sidechains” or “dappchains.” Using Loom’s software development kit (SDK), developers can build a dedicated sidechain to house their dapp, with ethereum serving as a secure, decentralized base layer.

Loom has already built DelegateCall, a kind of decentralized Stack Exchange, on a dappchain. In addition, two games are under development in-house, according to Duffy: one he compares to Magic: the Gathering, the other to Pokemon. The user experience, he says, is like any mobile game: “fully immersive, graphics – you actually wouldn’t really know that it’s running on a dappchain.”

As the company’s head of business development Michael Cullinan told CoinDesk in March, the Loom developer platform aims “to make it simple to make highly-scalable apps on the blockchain.”

However, the company’s since found that not every project wants its own dappchain – at least not in the beginning. Developers would have to set up validators to act as the nexus between the sidechain and the ethereum blockchain. Then, in order to achieve decentralization, they would have to incentivize users – if they had users – to act as validators themselves.

Many early-stage projects were looking for a simpler solution, so Loom came up with the idea of a shared dappchain. Duffy told CoinDesk: “this way, when someone launches a new application they don’t know how popular it’s going to be, so they can start on kind of a shared hosting plan.”

If the game does take off, the developers can “fork it and run it on its own dappchain.” Eventually, Duffy says, Loom may roll out multiple shared chains for different use cases: a games chain and a social media chain, for example.

The monthly fees the developers pay will depend on the cost of committing their users’ data to ethereum. How developers collect money from users is up to them: donations are one possibility, as are monthly charges through a smart contract.

Reckoning with the trilemma

Designing decentralized networks involves tradeoffs, and sidechains are no exception. Ethereum founder Vitalik Buterin described these tradeoffs as a trilemma, in which three different priorities are in tension: decentralization, security, and scalability.

Duffy recognizes this fact, and argues that ZombieChain is a kind of “middle ground.”

First, it’s important to note that Loom Network’s focus is on applications that need high levels of throughput: decentralized games and social networks. And Duffy argues that these use cases “don’t really need that high level of decentralization that you need on ethereum.”

On a decentralized social network, he says:

“Someone’s not going to pay millions of dollars to attack the network to censor someone else’s tweet.”

For that reason, Loom Network has opted to base its sidechains – including ZombieChain – on delegated proof of stake (DPoS), a consensus algorithm in which the network elects “validators” to serve in place of miners. How many validators is up to the developer: the higher the number, the slower – but more decentralized – the network.

As for the shared ZombieChain, Duffy says the number of validators hasn’t been decided. He notes, though, that “in the beginning, it’s fully centralized because we’re running all the validators. Then in the future we want to open it up to let other people run validators.”

To be clear, that’s the case with any new sidechain: until a user base develops, and some of those users are willing to serve as validators, the chain is centralized in the hands of its creator.

Down the line, therefore, ZombieChain can actually help ensure that new projects are to some degree decentralized and scalable from the outset. Rather than setting up on the slow and costly ethereum mainnet, or spinning up a new centralized dappchain, they can join ZombieChain.

Even projects that are already set up on mainnet, says Duffy, “could very easily port that same application to ZombieChain,” adding:

“It would reduce the cost significantly and also let them have a more fluid user experience.”

As for the third leg of the trilemma, security, Duffy does not appear to be worried. “It’s really important to have that decentralized base layer of ethereum,” he says, “because then you can use it like the high court.”

The mechanism for doing that, he continues, is Plasma Cash, which allows users to store valuable data – ether, for example – on the main blockchain, while still being able to trade it on the sidechain.

“If the sidechain did something dishonest,” he says, “you could contest it on mainnet and you would be able to withdraw your assets back to mainnet.”

For now, ZombieChain is just an idea, but it has the potential to allow new projects to deploy their dapps without sacrificing too much in terms of either scalability or decentralization.


The leader in blockchain news, CoinDesk is a media outlet that strives for the highest journalistic standards and abides by a strict set of editorial policies. CoinDesk is an independent operating subsidiary of Digital Currency Group, which invests in cryptocurrencies and blockchain startups.


The well-funded startups driven to own the autonomous vehicle stack – TechCrunch

At some point in the future, while riding along in a car, a kid may ask their parent about a distant time in the past when people used steering wheels and pedals to control an automobile. Of course, the full realization of the “auto” part of the word — in the form of fully autonomous automobiles — is a long way off, but there are nonetheless companies trying to build that future today.

However, changing the face of transportation is a costly business, one that typically requires corporate backing or a lot of venture funding to realize such an ambitious goal. A recent funding round, some $128 million raised in a Series A by a Shenzhen-based startup, got us at Crunchbase News asking a question: Just how many independent, well-funded autonomous vehicle startups are out there?

In short, not as many as you’d think. To investigate further, we took a look at the set of independent companies in Crunchbase’s “autonomous vehicle” category that have raised $50 million or more in venture funding. After a little bit of hand filtering, we found that the companies mostly shook out into two broad categories: those working on sensor technologies, which are integral to any self-driving system, and more “full-stack” hardware and software companies, which incorporate sensors, machine-learned software models and control mechanics into more integrated autonomous systems.

Full-stack self-driving vehicle companies

Let’s start with full-stack companies first. The table below shows the set of independent full-stack autonomous vehicle companies operating in the market today, as well as their focus areas, headquarters location and the total amount of venture funding raised:

Note the breakdown in focus area between the companies listed above. In general, some of these companies are focused on building more generalized technology platforms, perhaps to sell or license to major automakers in the future, whereas others intend not just to own the autonomous car technology, but to deploy it in a fleet of on-demand taxi and other transportation services.

Making the eyes and ears of autonomous vehicles

On the sensor side, there is also a trend, one that’s decidedly more concentrated on one area of focus, as you’ll be able to discern from the table below:

Some of the most well-funded startups in the sensing field are developing light detection and ranging (LiDAR) technologies, which basically serve as the depth-perceiving “eyes” of autonomous vehicle systems. CYNGN integrates a number of different sensors, LiDAR included, into its hardware arrays and software tools, which is one heck of a pivot for the mobile phone OS-maker formerly known as Cyanogen.

But there are other problem spaces for these sensor companies, including Nauto’s smart dashcam, which gathers location data and detects distracted driving, or Autotalks’s DSRC technology for vehicle-to-vehicle communication. (Back in April, Crunchbase News covered the $5 million Series A round closed by Comma, which released an open-source dashcam app.)

And unlike some of the full-stack providers mentioned earlier, many of these sensor companies have established vendor relationships with the automotive industry. Quanergy Systems, for example, counts components giant Delphi, luxury carmakers Jaguar and Mercedes-Benz and automakers like Hyundai and Renault-Nissan as partners and investors. Innoviz supplies its solid-state LiDAR technology to the BMW Group, according to its website.

Although radar and even LiDAR are old hat by now, there continues to be innovation in sensors. According to a profile of Oryx Vision’s technology in IEEE Spectrum, its “coherent optical radar” system is kind of like a hybrid of radar and LiDAR technology in that “it uses a laser to illuminate the road ahead [with infrared light], but like a radar it treats the reflected signal as a wave rather than a particle.” Its technology is able to deliver higher-resolution sensing over a longer distance than traditional radar or newer LiDAR technologies.

Can startups stack up against big corporate competitors?

There are plenty of autonomous vehicle initiatives backed by deep corporate pockets. There’s Waymo, a subsidiary of Alphabet, which is subsidized by the huge amount of search profit flung off by Google. Uber has an autonomous vehicles initiative too, although it has encountered a whole host of legal and safety issues, including the unfortunate distinction of being the first to kill a pedestrian earlier this year.

Tesla, too, has invested considerable resources into developing assistive technologies for its vehicles, but it too has encountered some roadblocks as its head of Autopilot (its in-house autonomy solution) left in April. The company also deals with a rash of safety concerns of its own. And although Apple’s self-driving car program has been less publicized than others, it continues to roll on in the background. Chinese companies like Baidu and Didi Chuxing have also launched full-stack R&D facilities in Silicon Valley.

Traditional automakers have also jumped into the fray. Back in 2016, for the price of a cool $1 billion, General Motors folded Cruise Automation into its R&D efforts in a widely publicized buyout. And, not to be left behind, Ford acquired a majority stake in Argo AI, also for $1 billion.

That leaves us with a question: Do even the well-funded startups mentioned earlier stand a chance of either usurping market dominance from corporate incumbents or at least joining their ranks? Perhaps.

The reason why so much investor cash is going to these companies is because the market opportunity presented by autonomous vehicle technology is almost comically enormous. It’s not just a matter of the car market itself — projected to be over 80 million car sales globally in 2018 alone — but how we’ll spend all the time and mental bandwidth freed up by letting computers take the wheel. It’s no wonder that so many companies, and their backers, want even a tiny piece of that pie.


Transaction Fees Temporarily Favor Bitcoin but Bigger Picture Looks Different

Transaction fees are a hot topic of debate in the cryptocurrency world. There are numerous discussions regarding Bitcoin versus Bitcoin Cash fees in this regard. It now seems the average Bitcoin transaction cost is lower than that of BCH. As is usually the case, such a temporary snapshot can look very different in a few hours.

The Bitcoin Fees in Perspective

When directly comparing BTC with BCH fees, it seems there is no real contest. The cost of a Bitcoin Cash transaction has been far lower compared to that of the world’s leading cryptocurrency. In fact, BCH costs have been relatively flat for quite some time now. Bitcoin’s fees, on the other hand, are erratic, as they have always been.

However, temporary snapshots often tell a different story. A tweet by Ven Verret shows BTC fees were lower than BCH’s over a six-hour period. An interesting development, albeit one that only tells a small part of the story. The screenshot itself shows that Bitcoin’s fees had spiked just an hour or two prior. Why that sudden increase took place remains a bit unclear at this time.

This doesn’t mean Bitcoin has suddenly become cheaper to use. When timing transactions correctly, that may be the case temporarily. Additionally, users with patience can still send transactions at 1 satoshi per byte. It may take several hours to confirm, but it is another interesting option worth exploring. When looking at the bigger picture, however, BCH remains the cheaper option by quite a margin.
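The fee arithmetic behind that “1 satoshi per byte” option is straightforward. A minimal sketch, assuming an illustrative 226-byte one-input, two-output transaction (the size is an assumption for the example):

```python
def tx_fee_sats(vsize_bytes: int, rate_sat_per_byte: int) -> int:
    """Bitcoin fee in satoshis: transaction size times the chosen fee rate."""
    return vsize_bytes * rate_sat_per_byte

TYPICAL_TX_BYTES = 226  # assumed one-input, two-output transaction

patient = tx_fee_sats(TYPICAL_TX_BYTES, 1)   # 1 sat/byte: cheap but slow
busy = tx_fee_sats(TYPICAL_TX_BYTES, 50)     # an assumed congested-mempool rate
print(patient, busy)
```

The spread between the two rates is the whole story of Bitcoin’s erratic fees: the same transaction costs fifty times more when the sender wants a prompt confirmation during congestion.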

The Debate Continues

As is usually the case when trends like these are pointed out, the public backlash is palpable. Bitcoin Cash supporters do not take kindly to this development. However, they also point out how their average TX fee is 0.24 cents or lower. That is quite cheap, although others will argue this point. The overall use of the BCH network compared to Bitcoin is very different.

The bigger question is how this situation will evolve. More specifically, there is a growing need for faster and cheaper Bitcoin transactions. The Lightning Network may effectively provide that functionality. Thanks to the growth in payment channels, it seems this logical development may occur pretty soon. Even so, the technology is still in beta testing, and an “official” launch has not yet been announced.

Developments like these are interesting to keep an eye on. However, keeping the bigger picture in mind needs to be a top priority first and foremost. That picture clearly shows Bitcoin has the highest market cap and Bitcoin Cash remains the altcoin. It also shows Bitcoin has higher fees in general, and that situation may not change soon. As such, the current status quo between BTC and BCH will not swing in favor of either currency.


First Bitcoin Mining Conference Hashes Over the High Cost of Energy

Bitcoin mining uses as much electricity as Ireland, and by the end of 2018, the Bitcoin network will be using as much energy as Austria, according to a new report by Alex de Vries of the Experience Center of PwC in the Netherlands.

Billed as the first serious, peer-reviewed study of energy use in crypto mining, the report has set off alarm bells, adding to current concerns about the impact of future mining energy consumption on environmental issues like climate change.

At the first-ever conference for crypto mining, held on May 17, 2018, in New York City, an expert panel hashed out the implications of rapidly growing energy consumption among miners worldwide.

Amber D. Scott, CEO of Outlier Solutions, moderated a panel of experts that spoke about the energy issue as part of a discussion on the topic of proof of work (PoW) vs. proof of stake (PoS).

Scott told Bitcoin Magazine that there was a lot of discussion at the conference about the new energy report in part due to the attention it’s currently receiving in the press.

“This is an area where there is a spectacular amount of FUD [fear, uncertainty and doubt],” she noted. “This is in part because it’s a nuanced issue that can’t be summed up in simple statements about net energy consumption.

“I think that part of the reason that Bitcoin has been a ‘target’ in this respect is that there are relatively straightforward calculations in terms of power consumption in conjunction with the underlying value not being well understood or widely accepted. For instance, few people question the utility costs of a bank or ATM, and the energy consumption cannot be calculated in a straightforward way,” she added.

Scott Howard, CEO and co-founder of Toronto-based ePIC Blockchain Technologies, told the conference audience what many there were already saying: that the energy consumption issue is “fake news” and has been oversimplified to suit Bitcoin opponents.

High Energy and Environmental Costs of Traditional Fiat and Banking

Traditional banking and fiat creation have their own high energy costs, noted a number of conference panelists, including Alex Petrov, CIO of Bitfury; Jan Čapek, CEO of Slush Pool; Samson Mow, CSO of Blockstream; and Howard.

They talked about the high infrastructure costs, in terms of both energy and the environment, associated with traditional banking systems, including ATMs, security and servers.

“There are 3.6 million ATMs deployed in the U.S. Each of them are using 7 to 800 watts just in standby mode,” said Petrov. “This alone generates huge numbers of electricity usage, slightly higher than the Bitcoin network.

“If you add … internal banking systems, CTVs, communicating with other banks, additional protection … you get higher costs than the cost of Bitcoin,” he added.
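Taking Petrov’s numbers at face value (3.6 million ATMs drawing roughly 750 watts on standby, the midpoint of his quoted range), a rough back-of-envelope calculation shows the scale he is describing:

```python
# Back-of-envelope check of Petrov's figures, taken at face value.
atms = 3_600_000
standby_watts = 750          # assumed midpoint of the quoted "7 to 800 watts"
hours_per_year = 24 * 365

# watt-hours per year, converted to terawatt-hours
twh_per_year = atms * standby_watts * hours_per_year / 1e12
print(f"{twh_per_year:.1f} TWh per year")
```

The result, roughly 24 TWh per year, lands in the same ballpark as the 25 TWh-per-year estimate for global Bitcoin mining that Čapek cites, which is the comparison Petrov was drawing.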

“Gold mining consumes enormous energy. Portions of the electricity crypto mining consumes come from power generation and distribution originally built to supply ore and precious metals extraction or processing,” said Howard.

“Where traditional energy consumers like gold mines really fall down is all the other negative externalities of the ‘wealth’ they create. Gold mining during and after is one of the most toxic and destructive operations on the planet.”

Howard talked about the use of abandoned industrial sites for Bitcoin mining, like pulp and paper mills that had been closed due to dwindling forestry supplies and increased concerns about energy use and toxic waste pollution.

“Lots of crypto mines are sitting in old industrial sites with a 100-megawatt transformer sitting next to them,” he said.

By way of example, in the Q&A session that followed the panel, one conference participant talked about his mining operation in British Columbia in a partly abandoned manufacturing site where water is pumped through his mine for cooling and then recycled into a warm-water fish farm.

Mining Doesn’t Create New Hydro Infrastructure: Power Is Produced Even If Unused

After the panel, Howard reiterated to Bitcoin Magazine what other conference panelists had said: that large energy mega-projects like hydro dams produce electricity whether they’re used or not.

“To my knowledge, no one has built out any net-new power generation to supply a crypto mine. Power generation stations are major pieces of infrastructure that take years to put in place and in the range of a decade to pay off. Most power generation is done, certainly in the western world, by publicly owned and/or regulated utilities. Those stations run 24/7, 365 for decades. The energy goes onto the grid no matter what and, until we figure out storage, is a perishable commodity.

“Crypto mining takes full advantage of this, typically sucking up energy at very low prices. The prices are low because the energy can’t find more productive use, often taking over abandoned industrial sites far away from urban centers,” he added.

Čapek told the conference that “China has been overinvesting in hydro power and has large amounts of power that is not being used” as a result of overbuilding hydro dams in anticipation of industrial needs (principally aluminum manufacturing) that never materialized.

“Cheap hydro power has helped China dominate the world’s Bitcoin mining business, and this power would be generated with or without Bitcoin mining,” said Čapek, who recommended a recently released study on the energy costs of mining in China.

Čapek said a rough estimate of global Bitcoin mining energy use is 25 terawatt hours per year, only a fraction of the amount of energy used in manufacturing aluminum globally.

PoW vs. PoS

The panelists mostly agreed that proof of work’s higher energy use is necessary for real security and that proof of stake is not likely to happen soon. They noted that PoS would not give the Bitcoin network the security it needs, even if it was more energy efficient.

“The more energy that’s expended, the more security you have,” noted Mow. “The system has to be nuclear-proof.”

Petrov defended the possibility of a hybrid model of both PoW and PoS, saying, “Proof of stake may work for some specific purposes, not necessarily in the financial area but inside private blockchains.”

Mow responded that using both PoS and PoW “brings out the worst of both worlds,” while Howard noted that Dash is using a hybrid model with some success, though it’s very expensive to use.

Howard concluded: “It [Bitcoin mining] is a positive economic activity, typically in places where it is needed, as well as meaningful revenue to the utilities which are major employers and usually profit centers for governments. The ‘waste of energy’ argument, like most establishment arguments against blockchains, does not pass the test of science.”


Cryptocurrencies like Bitcoin Less Wasteful Than Fiat Money


Earlier this week, I wrote on social media that it costs significantly less energy to produce cryptocurrencies like bitcoin and Ethereum than fiat money. The responses were, “that’s not true; once fiat money is created, no additional energy is required.”

Perhaps a better way to phrase the statement was to replace energy with resources, as fiat currencies do require significantly more resources than cryptocurrencies.


Currently, the vast majority of people are comparing bitcoin’s electricity consumption to the production of paper money at central banks like the Federal Reserve, dismissing manual labor, energy, and electricity required to distribute and transfer money.

Fiat requires commercial banks, central banks, ATMs, armored cars, hundreds of thousands of employees, among other things to work. The central bank, in this case the FED, does not magically distribute the US dollar to every person in the country at their doorstep. The FED distributes its US dollar to banks and its friends, who then distribute money with the hopes of trickling down the US dollar to the bottom of the economy.

Cash requires a truly massive infrastructure to function. In the US alone, there are more than 6,000 banks that process cash transactions. Most people no longer use cash in its physical form to transact. They rely on third-party service providers and banks like JPMorgan, Visa, and MasterCard to process payments. The amount of resources and energy these companies and their hundreds of thousands of employees consume should be included in any comparison between the energy consumption of bitcoin and that of banks.

Bitcoin is a peer-to-peer financial network, and due to its decentralized nature, no third party is required to transact. Alice can send Bob $100 by broadcasting the transaction to the mempool, where it is then picked up by miners to process. In return, miners are incentivized by the block reward and the transaction fees included in the block.
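
The broadcast-and-mine flow described above can be sketched in a few lines of Python. This is a toy illustration, not Bitcoin's actual protocol: the data structures, the fixed subsidy, and the single-miner sweep are all simplifying assumptions.

```python
import hashlib
import json

# A toy model of the flow above: transactions are broadcast to a shared
# mempool; a miner sweeps them into a block and collects a block subsidy
# plus the transaction fees. Structures and numbers are illustrative only.

mempool = []

def broadcast(sender, recipient, amount, fee):
    """Alice "broadcasts" a transaction by adding it to the mempool."""
    tx = {"from": sender, "to": recipient, "amount": amount, "fee": fee}
    mempool.append(tx)
    return tx

def mine_block(miner, prev_hash, subsidy=12.5):
    """A miner collects mempool transactions and earns subsidy + fees."""
    txs = list(mempool)
    mempool.clear()
    reward = subsidy + sum(tx["fee"] for tx in txs)
    payload = json.dumps({"prev": prev_hash, "txs": txs}, sort_keys=True)
    block_hash = hashlib.sha256(payload.encode()).hexdigest()
    return {"hash": block_hash, "txs": txs, "miner": miner, "reward": reward}

broadcast("Alice", "Bob", 100, fee=0.5)
block = mine_block("Miner1", prev_hash="0" * 64)
print(block["reward"])  # 13.0 (12.5 subsidy + 0.5 fee)
```

The point of the sketch is the incentive structure: the miner's reward is exactly the subsidy plus the fees attached to the transactions it confirms.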

Hence, while it may be accurate to claim that mining cryptocurrency requires more electricity, it is false to claim that creating bitcoin requires more resources than creating cash or paper money: the majority of the energy used by miners goes toward confirming and validating transactions, work that banks also perform globally.


Bitcoin Mining Farm

John Lilic, a member at Ethereum blockchain development studio ConsenSys, stated that the cost per transaction is significantly higher with crypto, and that is undoubtedly correct. Major banks like JPMorgan process trillions of dollars on a daily basis. Lilic said that in the long term, blockchain projects will have to find ways to process transactions and information more efficiently.

“The per unit cost of each tx is significantly higher with crypto. Data centres banks use are much more efficient than mining operations & legacy systems process orders of magnitude more tx’s per day than crypto. We need specificity around the energy issue, not conjecture. The real question is whether the gross energy inefficiency costs in crypto is worth the benefits like custody over assets. My contention is Yes! It is worth it but only if our industry prioritizes & continues to work towards energy efficiency gains like Proof of Stake.”

As cryptocurrencies and blockchain technology mature, developers will experiment with more efficient consensus algorithms and mining methods that may decrease the energy consumption of cryptocurrencies in the long term.

Images from Shutterstock



In-House vs. Outsourcing Software Development

Whether to use in-house or outsourced software development is a question you will come back to again and again. There are pros and cons to both, and your individual circumstances will clearly define your exact requirements. Here we will set out those pros and cons so you can make an informed decision for your business.

In-House Development

Utilising and building an in-house team may appear the most strategically sound direction for your business to go. Your staff will be under your direct control and working towards the same long-term goals. However, it is not without challenges and can leave you exposed.

Advantages of In-House App Development

  1. A Vested Interest in Your Organisation's Goals. Having an in-house team ensures you are all working collaboratively towards the same goals, with the same drive to achieve them.
  2. Company Standards Adhered To. You will no doubt have detailed in your business plan a strict set of company standards covering legal and commercial aspects. In-house, you are assured your staff will follow those standards. Coding standards will also be aligned, and you retain total control of the conventions used and the future maintenance requirements.
  3. Cultural Alignment. Building rapport and a positive culture in your workplace is far easier with in-house staff. You can undertake team-building activities or target annual reporting with behaviours. By having control over your culture you will have a team who can all work together.
  4. Long-Term Collaboration. By having your own software development team you can help assure project success by having the same people work on it from the start. They will know the product inside out and understand how to fix things quickly.
  5. Expeditious Reaction. An immediate reaction to any requests or issues is possible with in-house teams. They are focussed solely on your software.

What are the Disadvantages?

  1. Recruitment Costs. It can be very expensive to recruit staff through a recruitment agency, typically 5-10% of the annual salary. Even taking recruitment into your own hands is expensive: advertising openings in the right places can cost £250-£999 for a single listing on a well-known jobs board.
  2. Set-up Costs. Procuring hardware and software for your start-up will be your biggest expenditure. To keep up with competitors you will need to provide all of the equipment and licences for your software engineers to operate.
  3. Operating Costs. Once you have taken the fiscal hit of procuring your equipment, you then have to pay to run and maintain it.
  4. Set-up Time. Setting up your software team will take a lot of time; lead times for equipment and recruitment campaigns can run to several weeks or even months.
  5. Availability of IT Expertise. It is highly unlikely you will be able to recruit highly experienced, specialised experts from the outset; they will be at the higher end of the salary scale and may not want to leave current employers for a start-up.

Outsourced Software Development

An outsourced team provides many benefits to your start-up. Many large, global businesses started out in a garage, outsourcing their embryonic ideas in order to grow. Whilst there are some risks to outsourcing, they are typically outweighed by the pros.

Outsourcing Advantages

  1. Reduced Costs. By utilising an outsourced team you only pay for the services you require, and they are defined from the start in your contract.
  2. Defined Contracting Periods. By clearly defining how long you require the services, you are able to forecast costs.
  3. Scalability. An outsourced team will be able to undertake many different tasks and services for you. This enables you to expand your business far quicker by exploiting new technologies or opportunities as they emerge.
  4. Access to Expertise. By definition, an outsourced team will only employ experts in their field. You are able to access this expertise through your contract and get advice on improving efficiency and cutting costs.
  5. Adaptability and Resilience. As with scalability, your outsourced team can adapt to your changing requirements by bringing in staff from other departments of their organisation to work on your project, giving you a flexibility not afforded in-house.

Outsourcing Disadvantages

  1. Conflicting Priorities. You will almost certainly not be the only client the outsourcer has, so there may be conflicting priorities depending on who shouts the loudest or who pays more. This can affect your business, as you may not get the instant response you require.
  2. IPR Risks. Entrusting your data and sensitive information to an outsourcer can be risky. It only takes one breach to put your business at risk from competitors or hackers.
  3. Logistical and Geographical Issues. Your development team may be located on the other side of the world, which may complicate arranging conferences, meetings or visits.
  4. Control of Quality and Process. You will not have control of the quality and processes used in your software development until the product is delivered to you, at which point you may find some rework is needed to align it with your goals and standards.
  5. Cultural and Organisational Differences. With the ability to work around the globe it is possible to use an outsourcer from almost anywhere. Whilst a brilliant opportunity, this can bring problems if you do not research the cultural and organisational differences. These can be simple things like shorter working weeks or extended religious holidays, but in extreme cases can involve illegal activity or discrimination.

The Bottom Line

In-house may give you control, but it can be costly and very time-consuming. It can be done, but only with a lot of financial backing and patience from your customers.

Outsourcing has grown for over two decades and continues to do so. The pros are clear to see; the cons, whilst they need to be acknowledged, can be managed with careful planning.


Blockchain Must Adapt to Build Trust in the Internet of Things

Mic Bowman is a principal engineer at Intel and a member of CoinDesk’s advisory board. Camille Morhardt is the director of IoT strategy at Intel. 

The following article originally appeared in Consensus Magazine, distributed exclusively to attendees of CoinDesk’s Consensus 2018 event.

The edge is messy.

And the edge, where billions of interacting devices that will make up the Internet of Things will reside, is where IoT data is generated and acted upon.

There are often no secure physical perimeters where the raw sensing of the physical world takes place: on rooftops and space stations, inside mines and aircraft engines, on container ships and solar panels. Even edge counterparts that aggregate, filter, normalize, and increasingly interpret data, or send it to a cloud for additional analysis, are often mobile, have intermittent connectivity, and are subject to shock, vibration, or extreme temperatures.

As Things increase their connectivity and intelligence, so too will our demand for them to autonomously form networks, exchange information, and coordinate action on our behalves.

When we order an article of clothing online, for example, we indirectly call on, among others, a fashion designer, raw goods suppliers, logistics companies, customs, a distributor, an importer, a buyer, an inventory management system, a customer management system, a bank, a web management system for product placement and pricing, a retailer, and a last-mile delivery driver.

Were each of these participants able to gain near real-time insight into our purchase and its progression from factory to front door, they might be able to collaborate to optimize multiple independent systems in near real-time to get us the product as fast and in as good condition as possible – especially if there are unforeseen setbacks en route – a flat tire! – while preparing for their next order.

Yet the formation of these networks is rife with problems. In the best case, information collected, shared, and acted upon is inconsistent in quality and availability. In the worst case, it provides a completely new attack vector for malicious participants. When Things plan and act on our behalves, we want assurance that the data they utilize to make decisions is trustworthy.

Ensuring that information is trustworthy is hard enough when a central authority orchestrates device configuration, data collection and cleaning, and data dissemination.  However, distributed networks can’t rely upon a central authority.

Traditional means to assert and verify participant identity and integrity fail, because participating Things are made by different manufacturers, run different operating systems, communicate with different protocols, and act on behalf of different owners who have different motives. The answer may well lie in the emerging technology that has become known as “blockchain.”

Blockchain – or distributed ledger technologies in general – offers hope for expressing and establishing shared trust in information created and exchanged by Things: the immutable log of events that is the blockchain provides a means to establish authoritatively the provenance of information; to record and enforce policies for accessing the information; and to act on the information autonomously through “smart contracts.”
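
The "immutable log" property described above can be illustrated with a minimal hash-chained log in Python. Each entry commits to the hash of the entry before it, so altering any past event invalidates every later link. The entry format here is invented for illustration and does not correspond to any specific ledger.

```python
import hashlib
import json

# A minimal hash-chained log: each entry commits to the hash of the
# previous entry, so tampering with any past event breaks every later
# link. The entry format is invented for illustration.

def entry_hash(entry):
    return hashlib.sha256(json.dumps(entry, sort_keys=True).encode()).hexdigest()

def append_event(log, event):
    prev = entry_hash(log[-1]) if log else "0" * 64
    log.append({"prev": prev, "event": event})

def verify(log):
    """Check that every entry still commits to its predecessor."""
    return all(log[i]["prev"] == entry_hash(log[i - 1]) for i in range(1, len(log)))

log = []
append_event(log, {"device": "sensor-7", "reading": 21.4})
append_event(log, {"device": "sensor-7", "reading": 21.6})
print(verify(log))  # True
log[0]["event"]["reading"] = 99.0  # tamper with history
print(verify(log))  # False
```

This is the mechanism behind provenance claims: once an event is in the chain, any retroactive change is detectable by anyone who re-verifies the links.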

However, while there is tremendous promise, blockchain technologies must evolve substantially to meet IoT’s unique demands. The unique characteristics of IoT applications impose both technical and economic requirements that lead us to conclude that IoT applications must be situated within an economic, legal and regulatory context that extends beyond the blockchain. In particular, whereas traditional blockchain applications ascribe all authority to the blockchain, we believe IoT applications must achieve a balance of authority.

Technology requirements

Establishing trust in the information shared among Things creates new requirements for blockchain technologies. Generally, blockchain technologies operate as an authority for well-defined, deterministic systems. However, information created by Things sits outside the blockchain and is notoriously ambiguous and non-deterministic. Providing information assurance for qualitative data imposes new requirements on the technology.

Requirement 1: Identity and reputation of participants is central to trust and must be exposed.

Public blockchains like Bitcoin typically provide a history of the transactions on assets while anonymizing (or at least attempting to hide) the identity of those performing the transactions. For IoT applications, however, information becomes more complex than simple ownership of an asset.  In particular, most information generated at the edge is strongly qualitative; and once information becomes qualitative, its provenance – including the identity and reputation of the source – is critical. For example, a blockchain can accurately record the transfer of access rights to a piece of information that asserts that a container was shipped across town. However, a blockchain is unable to assert the authenticity of the GPS readings captured in the shipping record.

Purists from the cryptocurrency world will argue that a “permissioned blockchain” is an oxymoron; however, some form of identity verification is required for participants who join the network so they can trust the information the Thing contributes to the collective. This demand has led to the formation of private, permissioned, closed, and enterprise blockchains – all variants on the theme of restricted participation in the distributed network. There is another possibility that Things may be identified or otherwise certified to contribute information to an otherwise public blockchain – some sort of hybrid model that attempts to validate input but not restrict inputters. Other possible solutions involve the use of anonymous credentials and verifiable claims.
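
One way to picture the identity-verification point above: only readings that can be authenticated against a known device identity are accepted into the collective. The sketch below uses a shared-key HMAC as a stand-in; real deployments would use asymmetric keys, hardware attestation, or the anonymous-credential schemes just mentioned, and the key registry and device name here are made-up assumptions.

```python
import hashlib
import hmac
import json

# Only readings that authenticate against a known device identity are
# accepted. A shared-key HMAC stands in for real device attestation;
# the key registry and device name are made-up assumptions.

DEVICE_KEYS = {"gps-tracker-42": b"key-provisioned-at-manufacture"}

def _payload(device_id, reading):
    return json.dumps({"device": device_id, "reading": reading}, sort_keys=True).encode()

def sign_reading(device_id, reading):
    tag = hmac.new(DEVICE_KEYS[device_id], _payload(device_id, reading), hashlib.sha256).hexdigest()
    return {"device": device_id, "reading": reading, "tag": tag}

def accept(msg):
    key = DEVICE_KEYS.get(msg["device"])
    if key is None:
        return False  # unknown identity: no basis for trust
    expected = hmac.new(key, _payload(msg["device"], msg["reading"]), hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, msg["tag"])

msg = sign_reading("gps-tracker-42", {"lat": 48.85, "lon": 2.35})
print(accept(msg))  # True
msg["reading"]["lat"] = 0.0  # tampered en route
print(accept(msg))  # False
```

Note what this does and does not give you: it binds a reading to a registered identity, but, as the GPS example above points out, it cannot assert that the reading itself reflects physical reality.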

Requirement 2: Controlled access to information is critical.

Typically, blockchain transactions are transparent. The introduction of smart contracts that codify and execute detailed agreements between participants complicates this notion. Businesses don’t like to share confidential data with competitors. Smart contracts will be powerful tools in IoT, particularly in supply chains that include third party logistics companies. It’s quite common for disputes to arise at handoff points where there is transfer of custody of an asset. The ability to prove that the temperature of the container remained within contract parameters should allow immediate trigger of payment. Or conversely, proof that the good spoiled under party eight’s custody in a twelve-party supply chain that all participants can view will quickly resolve finger pointing.  And this proof must be constructed without revealing additional confidential information. For example, if an organization is collecting bids on produce that was in that container, the organization may not want all bidders to see every bid or to know the final sale price. In general, the information shared through transactions is subject to a potentially complex set of access policies.
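
The cold-chain settlement example above can be sketched as ordinary Python standing in for smart-contract logic. The temperature band, payment amount, and field names are illustrative assumptions, not any real contract's terms.

```python
# Plain Python standing in for the smart-contract logic described above:
# release full payment only if every recorded temperature stayed within
# the agreed band. The band, amount, and field names are illustrative.

def settle_shipment(readings, low=2.0, high=8.0, payment=10_000):
    """Return (paid, amount); withhold payment if the cold chain broke."""
    breached = any(not (low <= t <= high) for t in readings)
    if breached:
        return False, 0  # dispute: custody holder at the breach is liable
    return True, payment

print(settle_shipment([4.1, 5.0, 6.3, 7.9]))  # (True, 10000)
print(settle_shipment([4.1, 9.6, 5.0]))       # (False, 0)
```

The confidentiality requirement is the hard part omitted here: in practice the proof that the band held (or broke) must be verifiable by all twelve parties without revealing bids, prices, or other sensitive terms.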

Requirement 3: Efficiency matters.

Another core principle of blockchain is redundant compute and storage: every participant processes all transactions and maintains the ledger, creating an ever-growing demand for storage across the network. In IoT, where lightweight nodes at the edge frequently have extremely limited storage and compute power (because their primary purpose is to sense raw data as economically as possible), IoT blockchains will likely need to recognize the variety of nodes in the network and their relative capabilities. The blockchain itself may need to orchestrate which clients act as lightweight nodes, and which act as validators. Further, we are likely to see an increasing variety of consensus mechanisms that do not require massive quantities of computing power or specialized hardware, and are thus easier to scale or run on existing deployed equipment.  (Note, also, that while redundancy is often viewed as a feature for blockchain integrity, one that increases the cost to a malicious actor that seeks to break network consensus and introduce fraudulent transactions, it also simultaneously expands confidentiality risks. Ledger replication offers a wide surface area for attackers seeking access to individual nodes’ sensitive data.)

Requirement 4: Connectivity is intermittent; action must be taken when disconnected.

Intermittent connectivity seems paradoxical to the Internet of Things. As Jacob Morgan defined IoT in Forbes in 2014, “Simply put, this is the concept of basically connecting any device with an on and off switch to the Internet (and/or to each other).” The IoT community spent a lot of time espousing pervasive connectivity and a reduction in transmission and storage costs; however we now confidently make tradeoffs between connectivity and battery life, connectivity and transmission cost, connectivity and infrastructure cost. There are many, many edge nodes which by design receive or send data only intermittently and in small quantities. In essence, the same forces that drive autonomous interaction to the edge also require blockchains to accommodate connectivity constraints.

Requirement 5: Actions must be reversible.

To this point, the requirements we’ve discussed have been rather peripheral to the core of blockchain technology, focusing on performance and deployment characteristics; this one, however, represents a fundamental shift in one of the central tenets of the technology. Specifically, blockchain technology is founded on the principle of immutability; once something is committed to the log it never changes. This principle is particularly appropriate for the preservation of a record of unambiguous and deterministic events (such as transactions that represent the transfer of ownership of assets). However, data from the edge is often messy.

Precision and accuracy are limited by the physical capabilities of the Thing. And information generated at the edge is subject to a variety of malicious attacks that are difficult to detect. The messiness of data created (and consumed) by Things leads to a level of ambiguity and non-determinism that conflicts with blockchain technologies. Consider, for example, a smart contract that adjusts the target speed of vehicles on a road based on measured traffic flow. Weather issues that affect the accuracy of the flow sensor might trigger unintended adjustments in the target speed. A more troublesome example might occur when automatic payments are triggered when a shipping container arrives at a facility. A faulty RFID reader could report the existence of a container that has not actually arrived, triggering an inappropriate transfer of funds.

Often, some form of external recourse can audit and prescribe corrective transactions that address these problems (though this implies the existence of an external authority). However, issues arise where the information itself is problematic. For example, personal information might leak into a transaction; the effect of GDPR and other privacy regulations may require that information be removed from the record. This problem is not unique to IoT applications though we expect it to be more common in them.
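
The "corrective transaction" idea above can be sketched as follows: rather than mutating the immutable log, an authorized auditor appends a compensating entry that reverses the effect of a faulty one (such as the phantom RFID arrival). The auditor role and entry fields are invented for illustration.

```python
# Rather than mutating the immutable log, an authorized auditor appends a
# compensating entry that reverses the effect of a faulty one (here, a
# payment triggered by a phantom RFID arrival). Roles and fields are
# invented for illustration.

def balance(log, account):
    """Net effect of all logged transfers on one account."""
    total = 0
    for e in log:
        if e.get("to") == account:
            total += e["amount"]
        if e.get("from") == account:
            total -= e["amount"]
    return total

# A faulty RFID read triggers an automatic payment:
log = [{"id": 1, "from": "shipper", "to": "carrier", "amount": 500}]
# An auditor determines entry 1 was spurious and appends a reversal:
log.append({"id": 2, "from": "carrier", "to": "shipper", "amount": 500, "reverses": 1})
print(balance(log, "carrier"))  # 0
```

The log itself stays append-only; it is the *net state* that gets corrected, which is exactly the external-authority recourse the paragraph describes, and it does not help when the problematic information must be removed outright (the GDPR case).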

Economic Requirements

Beyond the technical requirements are simple economic barriers to blockchain adoption in IoT. Enterprises are familiar with centralized systems and in traditional, linear supply chains, they work well. When there is a strong purchaser at one end of a supply chain, there is every reason for that entity to simply set up a distributed database (that it manages centrally) and require all vendors participating in its supply chain to enter their data into it.

Until we enter the realm of multiple overlapping ecosystems and complex non-linear, dynamic supply chains (think: distributed manufacturing with over a dozen contributors to any given Thing printed, each with unique IP, equipment, and certifications), it is difficult to find an economically compelling use for truly decentralized ledgers.

However, the competitive environment in which these incumbents operate is rapidly changing, with 3D printing enabling distributed manufacturing and barriers to entry around machine learning and other fast-developing technologies lowering. To compete, enterprises may be forced to adopt more open systems. The IoT industry is inevitably expanding into more complex ecosystems. As a result, we expect compelling use cases for blockchain to become more apparent.

Herein lies a conundrum. Single strong purchasers orchestrate ecosystems around a supply chain because they accrue revenue by doing so. Distributed collaboration results in distributed value, so there is little incentive for any single, incumbent entity to set up the infrastructure to distribute orchestration. Blockchains are uniquely suited to micro-transactions, so scale may help solve this problem. The IoT community has seen a few subscription models and nonprofit models. However, until there emerges a clear, repeatable, compelling business model, adoption of blockchains for IoT will be slow.

Over the next couple of years we will likely see an increasing number of pilots and small scale deployments using the technology in sub-optimal usages, e.g. standard supply chains with a dozen or so participants to improve speed of asset tracking or provenance and reduction of disputes through audit – all important advances in IoT. In these early trials, industry and ecosystem leaders will seek to prove cost savings or incremental revenue.

We will then witness the evolution of standards that allow for cross-organizational device identity and configuration, with early methods for partitioning workloads across the variety of IoT devices, and protecting data or its meta-inputs via linked trusted execution engines or retention of encrypted states as data moves across edge, fog, and cloud nodes. Devices will autonomously form communities, exchange information, and present us with options for action based on their interactions.

Finally, we will likely see commensuration of data generated at the edge – not just across autonomous Things or organizations, but across autonomous ecosystems. At this point the blockchain will be more efficient than centralized systems at managing the complexities of non-linear supply chains, managing identity, provenance, shared data sets, and running smart contracts.

While we will be trusting machines to make some decisions and take some actions on our behalves, businesses in IoT will always want to retain the ability to revoke or reverse the actions taken by a smart contract, since humans are notoriously bad at contingency planning or future prediction, and the equipment that will be acting on our behalves will also often be responsible for keeping us safe.


We often talk about a blockchain as a replacement for a trusted third party for interactions within a community; that is, the community ascribes to the blockchain ultimate authority about “truth.” For applications built around a network of Things, however, the blockchain must be situated within a much larger context that incorporates institutional relationships, legal requirements, and regulatory control.

There is a very real danger for those deploying blockchain-based solutions for IoT to believe that the tamper-proof nature of the blockchain provides assurances about the integrity and trustworthiness of information (and about actions driven by that information).

A more realistic view is that the role of the blockchain transitions from a source of “shared truth” about the state of a system to a log of “decisions and actions” that might need to be adjusted in the future.

Network visualization via Shutterstock

The leader in blockchain news, CoinDesk is a media outlet that strives for the highest journalistic standards and abides by a strict set of editorial policies. CoinDesk is an independent operating subsidiary of Digital Currency Group, which invests in cryptocurrencies and blockchain startups.


Here is where CEOs of heavily funded startups went to school – TechCrunch

CEOs of funded startups tend to be a well-educated bunch, at least when it comes to university degrees.

Yes, it’s true college dropouts like Mark Zuckerberg and Bill Gates can still do well. But Crunchbase data shows that most startup chief executives have an advanced degree, commonly from a well-known and prestigious university.

Earlier this month, Crunchbase News looked at U.S. universities with strong track records for graduating future CEOs of funded companies. This unearthed some findings that, while interesting, were not especially surprising. Stanford and Harvard topped the list, and graduates of top-ranked business schools were particularly well-represented.

In this next installment of our CEO series, we narrowed the data set. Specifically, we looked at CEOs of U.S. companies funded in the past three years that have raised at least $100 million in total venture financing. Our intent was to see whether educational backgrounds of unicorn and near-unicorn leaders differ markedly from the broad startup CEO population.

Sort of, but not really

Here’s the broad takeaway of our analysis: Most CEOs of well-funded startups do have degrees from prestigious universities, and there are a lot of Harvard and Stanford grads. However, chief executives of the companies in our current data set are, educationally speaking, a pretty diverse bunch with degrees from multiple continents and all regions of the U.S.

In total, our data set includes 193 private U.S. companies that raised $100 million or more and closed a VC round in the past three years. In the chart below, we look at the universities most commonly attended by their CEOs [1]:

The rankings aren’t hugely different from the broader population of funded U.S. startups. In that data set, we also found Harvard and Stanford vying for the top slots, followed mostly by Ivy League schools and major research universities.

For heavily funded startups, we also found a high proportion of business school degrees. All of the University of Pennsylvania alums on the list attended its Wharton School of Business. More than half of Harvard-affiliated grads attended its business school. MBAs were a popular credential among other schools on the list that offer the degree.

Where the most heavily funded startup CEOs studied

When it comes to the most heavily funded startups, the degree mix gets quirkier. That makes sense, given that we looked at just 20 companies.

In the chart below, we look at alumni affiliations for CEOs of these companies, all of which have raised hundreds of millions or billions in venture and growth financing:

One surprise finding from the U.S. startup data set was the prevalence of Canadian university grads. Three CEOs on the list are alums of the University of Waterloo. Others attended multiple well-known universities. The list also offers fresh proof that it’s not necessary to graduate from college to raise billions. WeWork CEO Adam Neumann just finished his degree last year, 15 years after he started. That didn’t stop the co-working giant from securing more than $7 billion in venture and growth financing.

  1. Several CEOs attended more than one university on the list.


CommerceDNA wins the TechCrunch Hackathon at VivaTech – TechCrunch

It’s been a long night at VivaTech. The building hosted a very special competition — the very first TechCrunch Hackathon in Paris.

Hundreds of engineers and designers got together to come up with something cool, something neat, something awesome. The only condition was that they only had 24 hours to work on their projects. Some of them were participating in our event for the first time, while others were regulars. Some of them slept on the floor in a corner, while others drank too much Red Bull.

We could all feel the excitement in the air when the 64 teams took the stage to present a one-minute demo to impress fellow coders and our judges. But only one team could take home the grand prize and €5,000. So, without further ado, meet the TechCrunch Hackathon winner.

Winner: CommerceDNA

Runner-Up #1: AID

Runner-Up #2: EV Range Meter


Nicolas Bacca, CTO, Ledger
Nicolas worked on card systems for 5 years at Oberthur, a leader in embedded digital security, ultimately as R&D Solution Architect. He left Oberthur to launch his company, Ubinity, which was developing smartcard operating systems.

He finally co-founded BT Chip to develop an open-standard, secure-element-based hardware wallet, which eventually became the first version of the Ledger wallet.

Charles Gorintin, co-founder & CTO, Alan
Charles Gorintin is a French data science and engineering leader. He is a cofounder and CTO of Alan. Alan’s mission is to make it easy for people to be in great health.

Prior to co-founding Alan, Charles Gorintin was a data science leader at fast-growing social networks, Facebook, Instagram, and Twitter, where he worked on anti-fraud, growth, and social psychology.

Gorintin holds a Master’s degree in Mathematics and Computer Science from Ecole des Ponts ParisTech, a Master’s degree in Machine Learning from ENS Paris-Saclay, and a Masters of Financial Engineering from UC Berkeley – Haas School of Business.

Samantha Jérusalmy, Partner, Elaia Partners
Samantha joined Elaia Partners in 2008. She began her career as a consultant at Eurogroup, a consulting firm specialized in organisation and strategy, within the Bank and Finance division. She then joined Clipperton Finance, a corporate finance firm dedicated to high-tech growth companies, before moving to Elaia Partners in 2008. She became an Investment Manager in 2011 then a Partner in 2014.

Laure Némée, CTO, Leetchi
Laure has spent her career in software development in various startups since 2000 after an engineer’s degree in computer science. She joined Leetchi at the very beginning in 2010 and has been Leetchi Group CTO since. She now works mainly on MANGOPAY, the payment service for sharing economy sites that was created by Leetchi.

Benjamin Netter, CTO, Lendix
Benjamin is the CTO of Lendix, the leading SME lending platform in continental Europe. Having learned to code at 8, he has since been experimenting with ways to rethink fashion, travel and finance using technology. In 2009, in parallel with his studies at EPITECH, he created one of the first French applications on Facebook (Questions entre amis), which was used by more than half a million users. In 2011, he won the Foursquare Global Hackathon by reinventing the travel guide with Tripovore. In 2014, he launched Somewhere, an Instagram travel experiment acclaimed by the press. Today, with Lendix, he is reinventing the way European companies get faster and simpler financing.

And finally here were our hackmasters that guided our hackers to success:

Emily Atkinson, Software Engineer / MD, DevelopHer UK
Emily is a Software Engineer at Condé Nast Britain, and co-founder & Managing Director of the women-in-tech network DevelopHer UK. Her technical role involves back-end services, infrastructure ops and tooling, site reliability and back-end product. Entering tech as an MSc Computer Science grad, she spent six years at online print startup MOO, working across the platform, including mobile web and product. An advocate for diversity and inclusion in STEM and digital, Atkinson launched DevelopHer in 2016, a volunteer-run non-profit community aimed at increasing diversity in tech by empowering members to develop their careers and skills through events, workshops, networking and mentoring.

Romain Dillet, Senior Writer, TechCrunch
Romain attended EMLYON Business School, a leading French business school specialized in entrepreneurship. He covers many things from mobile apps with great design to fashion, Apple, AI and complex tech achievements. He also speaks at major tech conferences. He likes pop culture more than anything in the world.
