Companies offering services based on connected devices will increasingly have access to significant amounts of highly granular data about consumers and their connected devices. This trend heightens privacy concerns about how such data might be used and the potential for consumers to be harmed. Policy makers and business organizations with an interest in the long-term viability of the IoT market also need to ensure that consumers are not exploited under the cover of apparently beneficial services. From this perspective, concepts of trust and stewardship around the use of private data can be developed into new and appealing value propositions.
By harnessing data science to its full potential, top-ranking decision makers in all industries not only make better-informed decisions but make them with clearer predictions of the future. With that advantage on their side, they can stabilize businesses that have lacked a clear vision and save businesses on the brink of collapse. Once goals have been established, data scientists can work their magic and theorize how to reach them. Data science alone is not an advantage for decision-making; data science combined with good leadership is.
What do I mean by true real-time data? It is data that has just been generated and has never been stored, because once data has been stored, no matter for how long, it is no longer real-time. Can you imagine making vital business decisions based on three-month-old insights? How about a week old? Or a day old? Even minutes-old data can be irrelevant for the real-time decisions that matter most to your business, yet many people don’t understand the difference between real-time analytics on real-time data and real-time analytics on stale data.
Smart cities will be built on a combination of infrastructure from players including telecoms operators, mobility operators, public safety agencies, and utilities, as well as infrastructure from the cities themselves, which will run a mix of legacy and platform-based IoT deployments. It is widely acknowledged that no single IoT platform will dominate the market. Here we see the increasing need for IoT platforms to exchange data to address the requirements of cross-application use cases. This will only be achieved by IoT platforms designed and managed for interoperability.
Big data, Internet of Things (IoT) applications, and self-service portals will make it easier than ever for businesses to anticipate their customers' needs, lightening the load on customer support agents while giving consumers near-instantaneous responses to their queries. Customers have changed, so technology is molding customer service into a more self-serving, instantaneous, and data-driven platform that leaves consumers more satisfied. Increasing sophistication in technology and big data makes it easier than ever for businesses to address customer support issues before they even occur.
The smart city concept is now firmly on the operational agenda of government officials and private sector solution providers. The evolution towards grounded solutions means that adopters and solution providers require tangible strategies along with workable frameworks and planning tools to initiate their smart city initiatives. The emerging smart city industry already hosts multiple smart city reference architecture initiatives, alongside multiple checklist criteria that aim to rank cities on their implementation roadmaps. Let’s focus on two structural features of city planning and management to illustrate the real-world challenges that city authorities will have to overcome. The first deals with the differing economic profiles that characterize individual cities.
Cyber risk is recognized as a major threat by both insurers and their clients. Is there any proven way to manage this risk efficiently? Cyber insurance clients will have to beef up their cyber risk strategy, if they have one, and make sure that they are constantly up to date with the latest software, firmware, and hardware fixes where possible. They must also train employees to understand cyber risk. There is an opportunity here for insurtechs and development houses that can consult for small and medium businesses; not everyone can afford to hire a CISO. Insurers will need to develop expertise in cyber insurance and start gathering their own cyber risk data. Insurers must also ensure that their own house is in order, as they are an attractive target for data breaches.
Data is slowly replacing experience and tradition in the way companies do business. It has already proven its value in different verticals, including finance, healthcare, and of course, retail. The first obstacle is to define the scope of the Big Data project. What are the most critical questions the company needs to answer? What data sources should they analyze? Are these already available? Is the data clean and reliable? While Big Data-related modifications can disrupt usual workflows and slow the business during the initial implementation phase, the opposite happens once they are in place.
What do we expect from a Senior Data Scientist? Senior Data Scientists understand that software and machine learning have a lifecycle, and so spend a lot of time thinking about that. They understand that data always has flaws, whether from the data-generating process or from biases in the data itself. They understand the ‘soft’ side of technical decision making, focus on impact and value, and care about ethics. At the very least, Senior Data Scientists should read some of the codes of ethics in data science and form views on them. Ideally, you should have your own code of ethics, and perhaps hold yourself to it.
The technical and commercial success of cloud computing technology made it feasible to evolve the most demanding information and communication technology (ICT) infrastructures, such as communication networks, from specialized hardware and software to new software paradigms, referred to as ‘cloud-native’. Internet of Things (IoT) virtualization – IoT built on cloud-native principles – is to IoT platforms what Network Function Virtualization (NFV) is to communication networks.
In my previous post we covered the basic terms and definitions for data types and structures; now let’s dive into the creative and most time-consuming side of data science: cleaning and feature engineering. What are some of the basic strategies that data scientists use to clean their data and improve the amount of information they extract from it? The cleaning and engineering strategies used usually depend on the business problem and the type of target variable, since these influence the algorithm and data preparation requirements.
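To make these strategies concrete, here is a minimal sketch in pandas, using a small hypothetical customer dataset (the columns and values are invented for illustration): median imputation for numeric gaps, an explicit "unknown" label for missing categories, and one-hot encoding as a simple feature engineering step.

```python
import numpy as np
import pandas as pd

# Hypothetical raw data with missing values.
df = pd.DataFrame({
    "age": [25, np.nan, 47, 31],
    "income": [40_000, 52_000, np.nan, 61_000],
    "city": ["NY", "SF", "NY", None],
})

# 1. Impute missing numeric values with the median (robust to outliers).
for col in ["age", "income"]:
    df[col] = df[col].fillna(df[col].median())

# 2. Fill missing categories with an explicit "unknown" label.
df["city"] = df["city"].fillna("unknown")

# 3. Feature engineering: one-hot encode the categorical column.
df = pd.get_dummies(df, columns=["city"], prefix="city")

print(df)
```

Which imputation to use depends on the target and the algorithm: tree-based models tolerate coarse imputation, while linear models are more sensitive to it.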
With almost no industry untouched by blockchain-mania, what opportunities does the technology hold for the mobile industry? When we consider the application of blockchain to the telecoms industry, it is difficult to see large-scale disruption of existing operating models. This is because the industry has long-established supply chains and, in 3GPP, a well-functioning and transparent institution that fosters innovation and technology standardization. It is also an industry with several decades of experience delivering cost reductions from scale economies. This limits the scope for blockchain to disrupt the established order.
Whether it’s smart wearables, connected cars and machines, consumer electronics, or smart city deployments, there is no denying the internet of things is growing at an increasingly rapid rate. However, seizing this opportunity in the right way to create true value for enterprises and consumers, while ensuring a safe and secure experience, does not come without challenges. Addressing these challenges is not something device providers, service providers, or application developers can ignore in their IoT strategies, but there is a solution.
If you are getting started in your data science journey and don’t come from a technical background, then you definitely understand the struggle of keeping up with data pre-processing terminology. This is a real concern, considering that data scientists spend roughly 60% of their time cleaning and organizing data! This is the FIRST article, so we will only focus on key terms. Make sure to follow me to read the next posts, which will focus on feature engineering, model selection, and more. Keep in mind that some of these terms differ depending on the language or platform you are using, but I hope this gives you a good overview.
How many people in the world do you think fit this bill? And how many of those people have the soft skills to be customer-facing, client-facing, and management-facing, yet analytical, creative, and intelligent? We are asking the wrong things from Data Scientists and we are looking in the wrong places. There is no possible way that a Data Scientist will use all these tools at one company, and it is even less likely that one person knows all these languages. Data science is more about the intelligent use of programming than about programming itself.
Artificial intelligence (AI) and machine learning have been around for years but have been more widely used in the business-to-consumer (B2C) space. If you’re in the business of lead generation in the B2B space, presenting offers that prospects are likely to act on is key. By learning user behavior and refining your digital experience, your marketing and sales approach becomes much more effective. In terms of B2B lead generation, this means using key metrics to identify your most valuable buyer personas.
In order to perform validation, you need data. More specifically, you need data labeled with the information that you want to predict. We call this information the ground truth, and it is usually provided by humans. The ground truth is the actual value that we want our predictor to produce for each data point. The adventure of validation begins once you have both your predictor and the data with ground truth. If the ground-truth data was held out from your development process, validation is easy.
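A minimal sketch of this held-out validation with scikit-learn, using a synthetic dataset in place of real human-labeled ground truth (the dataset, model, and split sizes here are illustrative assumptions, not the author's setup):

```python
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

# Synthetic data standing in for examples with human-provided ground truth.
X, y = make_classification(n_samples=500, n_features=5, random_state=0)

# Hold out ground-truth data that the development process never sees.
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=0
)

# Develop the predictor on the training portion only.
predictor = LogisticRegression().fit(X_train, y_train)

# Validation: compare predictions against the held-out ground truth.
acc = accuracy_score(y_test, predictor.predict(X_test))
print(acc)
```

The key point is the hold-out: because `X_test` and `y_test` never touched training, the score is an honest estimate of performance on new data.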
In this note, we’ll cover the gradient descent algorithm and its variants: batch gradient descent, mini-batch gradient descent, and stochastic gradient descent. Gradient descent is the most common optimization algorithm in machine learning and deep learning. It is a first-order optimization algorithm, meaning it only takes the first derivative into account when updating the parameters. Let’s first see how gradient descent and its associated steps work on logistic regression before going into the details of its variants.
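As a sketch of the batch variant on logistic regression, the code below implements the first-order update from scratch in NumPy (the tiny dataset and learning rate are illustrative choices): each step computes the gradient of the log-loss over the whole batch and moves the weights against it.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def batch_gradient_descent(X, y, lr=0.1, n_iters=1000):
    """Batch gradient descent for logistic regression.

    First-order only: the update uses the gradient of the log-loss,
        grad = X^T (sigmoid(Xw) - y) / n,
    computed over the entire batch at every iteration.
    """
    n, d = X.shape
    w = np.zeros(d)
    for _ in range(n_iters):
        preds = sigmoid(X @ w)            # current predictions
        grad = X.T @ (preds - y) / n      # first derivative of the loss
        w -= lr * grad                    # parameter update step
    return w

# Tiny example: one feature plus a bias column; labels split at x = 1.5.
X = np.array([[1, 0.0], [1, 1.0], [1, 2.0], [1, 3.0]])
y = np.array([0, 0, 1, 1])
w = batch_gradient_descent(X, y)
print((sigmoid(X @ w) > 0.5).astype(int))  # recovers [0, 0, 1, 1]
```

The variants differ only in how much data feeds each gradient: stochastic uses one example per update, mini-batch a small subset, and batch (as above) the full dataset.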
There are many cases where projects that were supposed to be very small kept being developed for years, and projects designed as “fire and forget” ended up being very important for the organization. This is why even your shortest code should have good naming. If you have never thought about naming, I hope that after reading this you will try naming your entities better and see how it improves the quality of your code. The habit of better naming might seem hard to build at first; you may not want to spend time finding better names. However, this is a habit that pays back. Practice it even if the code you write is a prototype or part of a tiny project.
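To illustrate the point, here is a hypothetical before/after of the same small function (both names and the filtering task are invented for this example): the logic is identical, but the second version reads like a description of what it does.

```python
# Before: names say nothing about intent.
def f(l, n):
    r = []
    for x in l:
        if x > n:
            r.append(x)
    return r

# After: the names carry the meaning, so no comment is needed.
def filter_above_threshold(values, threshold):
    return [value for value in values if value > threshold]

print(filter_above_threshold([1, 5, 10], 4))  # [5, 10]
```

Six months later, a reader of `filter_above_threshold(prices, budget)` needs no archaeology; a reader of `f(l, n)` does.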
There is a lot of excitement about analytics and machine learning. The improvements in analytics, AI, and machine learning are amazing, but if they don’t solve business problems, they provide you with numbers, not answers. The field is moving through its hype cycle but still faces many challenges. Don’t get too excited about deploying an analytics solution: make sure you know which business problems you want to solve, and make sure the solution helps you solve them. Real-time analytics provides value at the point of activity within your current workflow or process.
Cognitive search offers the potential for dramatic improvements in the accuracy, relevance, and efficiency of insight discovery. Although some see cognitive search as simply traditional search enhanced by machine learning and artificial intelligence, there is actually a sophisticated combination of capabilities that makes it distinct from and superior to traditional enterprise search. Cognitive search goes well beyond search engines to bring together myriad data sources, along with sophisticated tagging automation and personalization, vastly improving how an organization’s employees find, discover, and access the information they need to do their jobs.
Artificial Intelligence (AI) is barely past its infancy. It is not yet widely accessible, and the AI-based solutions we are using or deploying today are far inferior to what we expect over the next two to three decades. Two AI bots pitted against each other could fight indefinitely, given the state of AI as it stands. All this indicates that AI needs strong leadership: a leader who can set the direction, and control and govern the innovation itself.
The supply chain in the pharmaceutical industry is complex, with drugs changing ownership from manufacturers to distributors, repackagers, and wholesalers before reaching the customer. Consequences include the counterfeit drug problem and inefficient processes for conducting recalls and processing returns. These inefficiencies result in financial losses and a loss of trust with consumers. Blockchain could provide a platform to increase trust and transparency, with customers able to track pharmaceutical products throughout the supply chain, while only trusted parties are granted access to write to the blockchain.
Most business problems can’t be turned into a game, however: you have more than two players and no clear rules. The outcomes of business decisions are rarely a clear win or loss, and there are far too many variables. So it’s a lot more difficult for businesses to implement AI than it seems. AI is advancing rapidly and will surely make it easier to clean up and integrate data. But business leaders will still need to understand what it really does and create a vision for its use. That is when they will see the big benefits.