
Kemal Yesilbek

About Me

Kemal Tugrul Yesilbek, a data scientist at Lone Rooftop, focuses on machine learning and data science practices. He has published multiple research papers on machine learning and its applications in academic journals and conferences. He is experienced in building machine learning solutions from idea to operation.

A Great Pitfall: Neglecting Validation

In order to perform validation, you need data. More specifically, you need data labeled with the information you want to predict. We call this information the ground truth, and it is usually provided by humans. In our processes, we treat the ground truth as the actual value we want to predict for our data. The adventure of validation begins once you have both your predictor and data with ground truth. If that labeled data was not used during your development process, validation is easy: you simply compare the predictor's output against ground truth it has never seen.
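As an illustration, here is a minimal sketch of that easy case, assuming scikit-learn and a toy labeled dataset (the dataset, model, and split are illustrative choices, not from the original post): part of the ground-truth data is held out of development and used only to score the predictor.

# Minimal holdout validation sketch; dataset and model are illustrative.
from sklearn.datasets import load_iris
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

X, y = load_iris(return_X_y=True)  # y holds the human-provided ground truth

# Keep part of the labeled data out of development so it can serve for validation.
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, random_state=42
)

model = LogisticRegression(max_iter=1000).fit(X_train, y_train)

# Validation: compare the predictor's output with ground truth it never saw.
predictions = model.predict(X_test)
print(f"Held-out accuracy: {accuracy_score(y_test, predictions):.2f}")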

The Effect of Naming in Data Science Code

There are many cases where projects that were supposed to be very small ended up being developed for years, and projects designed as “fire and forget” ended up being very important for the organization. This is why even your shortest code deserves good naming. If you have never thought about better naming, I hope that after reading this you will try to name your entities better and see how it improves the quality of your code. The habit of naming things well might seem hard to build at first. You may not want to spend your time finding better names. However, this is a habit that pays back. You should practice it even if the code you write is a prototype or part of a tiny project.
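For illustration, here is a small hypothetical before-and-after (the function and names are invented for this example, not taken from the post) showing how much intent a better name can carry:

# Before: terse names hide what the code does.
def f(d, t):
    return [x for x in d if x[1] > t]

# After: the same logic, readable without extra documentation.
def filter_rows_above_threshold(rows, threshold):
    """Keep only the rows whose second column exceeds the threshold."""
    return [row for row in rows if row[1] > threshold]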

Iteratively Finding a Good Machine Learning Model

There is a theorem, the “no free lunch” theorem, telling us that no single machine learning method performs best on all problems. So how do we find the one that best fits our needs? This post suggests that before going into complex methods and spending time fine-tuning your deep learning model, you should try simple ones first. As you gear up towards more complex methods, you may find that a simple one is already sufficient for your needs. No matter how complicated or simple a method is, it will not perform best on every problem.
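A rough sketch of that iteration, assuming scikit-learn (the candidate models and dataset are illustrative, not from the post): start from a trivial baseline, move to a linear model, and only then to something heavier, stopping as soon as a model is good enough.

# "Start simple" model iteration sketch; candidates ordered from simplest to more complex.
from sklearn.datasets import load_breast_cancer
from sklearn.dummy import DummyClassifier
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

X, y = load_breast_cancer(return_X_y=True)

candidates = [
    ("baseline (majority class)", DummyClassifier(strategy="most_frequent")),
    ("logistic regression", make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000))),
    ("random forest", RandomForestClassifier(n_estimators=200, random_state=0)),
]

# Evaluate each candidate with cross-validation; stop escalating once a model is good enough.
for name, model in candidates:
    score = cross_val_score(model, X, y, cv=5).mean()
    print(f"{name}: {score:.3f}")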

When Even a Human is Not Good Enough as Artificial Intelligence

While some things are easy to measure, intelligence is not one of them. Intelligence is an abstract and complex thing to measure. How do we, as people, perceive artificial intelligence, and what do we expect from it? It seems that when people judge an AI’s ability, we are harsh. We want AI to be perfect. We do not extend to AI the tolerance we show towards human mistakes. We want AI to be “like a human”. Whatever the reason, it is a common pattern: we expect artificial intelligence to be comparable to human intelligence.
