

Recommendation Engine for Video Streaming Platform

Company Description

We are a young technology company that has pioneered online streaming on mobile devices. We enable our clients to seamlessly stream their content to consumers through our myplex platform. Our services also include advising our clients and helping them optimize their acquisition and retention strategies.

Problem Statement

Our vision is to create a best-in-industry recommendation engine that aids the content discovery process for users globally by suggesting content based on their likes and behaviour. The recommendation engine should use the internally and externally available data about users/content and leverage sophisticated machine learning models to arrive at relevant recommendations.

The recommendation engine should address capabilities including, but not limited to, the following:

  1. Recommend relevant content to viewers based on their in-app behaviour and demographic details (for new users where no other data is available)
  2. Create user profiles based on content consumption history
  3. Account for the recency of behaviour, e.g. content viewed in the last 24 hours could be given higher weightage than content viewed a month back
  4. Provide recommendations based on the time of day the content is played/browsed, e.g. a user might have a habit of consuming news content during the day and movies in the evening
  5. Give higher weightage to content viewed for a longer duration
  6. Capture pure browse behaviour, where the user browses but does not consume
  7. For new users, learn preferences by asking questions on language, favourite genre/movie/actor/director, etc.
  8. Track user preference via the content s/he selects from all the content served as recommendations, and build it into future recommendations
  9. Provide recommendations for different categories, e.g. Recommended for You, Similar Movies, Recommended Movies, Recommended TV Series, New Movies
  10. To further refine recommendations, consider (along with relevance to the user) the overall popularity of the content through internal (usage) as well as external data (IMDb, Rotten Tomatoes, etc.)
  11. Extract tags for content using external databases (Wikipedia, IMDb, Rotten Tomatoes, etc.)
  12. Our platform handles millions of content streams monthly, hence the recommendation engine should be able to process the data and provide real-time results
  13. Get the user behaviour data in the required format using the APIs provided by the app
  14. Support placeholders to manually add content to the carousels, in addition to the recommendations, when required
  15. Provide an interface (portal as well as app) for users to specify their preferences, and showcase recommendations based on the inputs they share
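Capabilities 3 and 5 above amount to a time-decayed, duration-weighted score per viewing event. A minimal sketch in Python, assuming an exponential decay with a tunable half-life (the function name and the 7-day half-life are illustrative, not from the brief):

```python
import time

def interaction_weight(watched_seconds, content_length_seconds,
                       viewed_at, now=None, half_life_days=7.0):
    """Score one viewing event: completion ratio decayed by recency.

    Content viewed in the last 24 hours outweighs content viewed a
    month back (capability 3), and longer watch durations earn a
    higher weightage (capability 5). The half-life is an assumed
    tuning parameter.
    """
    now = now if now is not None else time.time()
    completion = min(watched_seconds / content_length_seconds, 1.0)
    age_days = (now - viewed_at) / 86400.0
    decay = 0.5 ** (age_days / half_life_days)
    return completion * decay
```

With these weights summed per genre or title, watching half of a movie yesterday outranks a complete view from a month ago, giving one simple input to the user profile in capability 2.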

Data:

We have multiple clients using the platform, and the data available might vary based on the information made available by each client. Enclosed are the details of data availability. Our data is stored in MySQL and Hadoop. We would also like to use other publicly available data sources for enrichment.

Questions that we have:

  • We have the following information requests/questions to gather the details required for considering partnering with your organization to develop the aforementioned recommendation engine:
  • For which industries have you developed recommendation engines previously? Please share a broad description of the underlying logic. We would also like to explore one of the recommendation engines you have deployed for clients in the past.
  • Profiles of the experts who will work on the project, along with details of their qualifications, past projects, and the tools and technologies they have used.
  • What technology stack would you suggest for developing the recommendation engine?
  • What approach would you use to develop the recommendation engine?
  • How would you suggest evaluating the quality of recommendations?

Timelines

The project needs to be completed within 6 weeks

Please provide a list of milestones and a ballpark estimate of hours to complete each.

Hi-Tech
Telecommunications
Deep Learning

$30,000

Starts Apr 04, 2018

14 Proposals Status: IN PROGRESS

Net 60

Client: A****** ************

Posted: Feb 05, 2018

Development of Global Philanthropy Platform (Minimal Viable Product)

We are seeking proposals to engage the services of a development team able to develop and deploy a unique global online philanthropic venture (“the Vehicle”). The platform is to be built using a popular web development framework, such as Rails, Django, or Node.js.

The Platform

A "pass-through" Vehicle that leverages the latest technology with best-in-class accountability and transparency practices to unleash the catalytic potential of philanthropy, giving donors an unprecedented choice to direct funds strategically and effectively towards the world’s major humanitarian and developmental challenges.

The Vehicle intends to improve and facilitate impact-driven bespoke reporting, with healthy competition for receipt of funds driving organisations to improve governance and the quality of impact reporting. This will in turn stimulate and encourage further donations through the Vehicle.

The platform will ultimately utilise Big Data to pull information from multiple sources and generate Automatic Reports for the donors as well as the recipient agencies. It will also have advanced artificial intelligence (AI) and smart algorithms to perform the following:

  • Analyse data to showcase humanitarian needs based on humanitarian and development aid agencies input, which would feed-in and update the humanitarian priorities section on the platform in real time.
  • Analyse and display trends to make it easy for donors to identify causes and beneficiaries of choice.
  • Showcase the impact of the mass micro-donations of the retail donors.
  • Generate reports on donor trends, which will assist beneficiary agencies to position their appeals based on donor interests and requirements.
  • Provide real-time and balanced exposure to all humanitarian needs around the world, enabling donors to make informed decisions on most pressing issues and needs.

Scope of This Effort

For this current project, we are seeking to establish a strong technical basis in the form of a "minimal viable product" (MVP) which will allow us to demonstrate the core capabilities of the system to manage both donors and charities, to accept donations for subsequent distribution, and to perform basic reporting functionality. If this MVP is successful then subsequent projects will focus on incorporating more advanced capabilities as described above. 

We have developed a set of reference user experiences and use cases as well as a system architecture for the ultimate system (see the attached documents) but expect that the developer will work with the sponsors to adapt this design as appropriate to ensure that we meet our very tight schedules. Our emphasis for this project is to develop and deploy a fieldable system, knowing that we will incorporate additional functionality as we grow.  Subsequent projects will focus on the analytics and machine learning aspects of the project.

At a high level, this project will result in a system which accomplishes the following:

  • A browser based desktop application able to support donors, charities and the platform administrators 
  • Secure user management capabilities for donors, charities and platform administrators
  • A content management capability for representing the charities and associated projects 
  • An integrated payment system able to accept and track donations
  • An appropriate business intelligence tool for providing reports to both the charities and the platform manager. This visualization and querying tool could be constructed using existing tools (e.g. Tableau), through the AWS-hosted service QuickSight, or perhaps via custom development (e.g. D3).
  • The architecture will be designed to leverage Amazon Web Services as appropriate.
  • A successful completion of this project will result in a system that has been deployed on AWS 

More details are provided in the accompanying requirements document; however, to reiterate, we do not intend to construct the entire system described in that document under the project being staffed through this RFP.

PLEASE NOTE: the results of this development MUST be completed and deployed by April 15, 2018

Proposal Requirements

In your proposal, please provide:

  • Previous work that you have done that is relevant (please include URLs of live systems and not of your profile on other marketplaces);
  • How you would approach this development exercise; 
  • Proposed milestones;
  • Estimated hours and budget.

You can look at the two attached documents and propose which features you can deliver by the April 15 deadline for the MVP.  Proposals that do not address the above requirements will not be considered.

Non-Profit
Web Programming
Software and Web Development

$150,000 - $200,000

17 Proposals Status: CLOSED

Net 30

Client: P*******

Posted: Feb 02, 2018

WorkFusion Development - Create Human Tasks to existing Business Process

We are creating credit investment data to help analysts automate the required data sets for proper underwriting of investments.

We use WorkFusion (WF) to read, parse and extract data from legal agreements that are received in pdf format.

We have built Phase 1 of a WF business process that breaks out the key sections of the legal agreement (i.e. Cover Page, Recitals, Table of Contents, Defined Terms, Sections) using parsing and regular expressions.

We now need to build 5 human tasks by segmenting specific sections and then providing automated extraction where possible.  

We have 500 total legal agreements that need to be run through this process. All work should be developed directly in the WorkFusion platform/Business Process.

Attached is a screenshot of the current Business Process that needs to be worked with.

In your proposal please answer the following questions:

  • Have you worked with WorkFusion in the past? Describe your experience using WorkFusion.
  • What is your comfort level with Java?
  • How many years of RPA experience do you have?
Financial Services
Robotic Process Automation
WorkFusion

$50/hr - $60/hr

Starts Feb 05, 2018

3 Proposals Status: CLOSED

Client: K****** ************

Posted: Jan 31, 2018

Ethereum Smart Contract Audit For ICO

== ON HOLD UNTIL FURTHER NOTICE ==

We are an innovative company based in Tokyo that focuses on cryptocurrency assets and blockchain technology.

Currently we are launching an ICO for an ERC20 token. The token comes with a fixed amount available to investors, and the corresponding smart contracts have basic ERC20 functionality. Furthermore, features related to withdrawal and liquidation of the token itself have been added. The token is part of the financial product and has a guaranteed value due to its underlying asset.

The contracts are currently deployed on the Rinkeby testnet and can be used there. We need a smart contract audit to test and verify all functionality and make the contracts safe from exploits. Additionally, we require a gas analysis for each function call that can be made on the Ethereum blockchain.

The project timeline foresees about 3 weeks for the completion of the audit including iterative feedback integration.

Only companies able to provide security expertise for smart contracts should apply for this project since we need official proof of the audit demonstrable to the investors.

Deliverables:

  1. Code review of the Solidity files. This requires working closely with the developer to understand all the functionality of the contracts.
  2. Comprehensive analysis of the long-term goals of the contract. Advice on rollover to new contracts after the funding period has run out.
  3. Gas analysis. Minimum requirements for each call and (where possible) a maximum gas limit listed for the functions.
  4. Security: an iterative procedure, working with the developers, to stop any third party from acquiring tokens without our consent. This needs to cover all previously successful hacking attempts on ERC20 tokens.
  5. Finally, we would need a certificate for the security audit, including proofs.
Financial Services
Economic Modeling
Finance

$15,000 - $20,000

Starts Apr 23, 2018

6 Proposals Status: CLOSED

Client: O****** ****

Posted: Jan 29, 2018

Social Signal Detection Using NLP & Text Analytics

A SUMMARY OF OUR BUSINESS:

Big Spaceship is a creative agency focused on leveraging cultural intelligence to solve key business problems for our partners. We recently won OMMA’s Agency of the Year and work with industry leaders - including JetBlue, Starbucks, Google, and Hasbro. We have 115 employees, with one centralized office in Brooklyn.

 

THE PROBLEM WE’RE TRYING TO SOLVE:

Big Spaceship is working with various brands to better detect trends before they enter mainstream internet culture. In order to get at the forefront of these trends, we have set up different “tribes” or communities to monitor. These tribes are made up of a defined set of Twitter users who we’ve manually categorized based on target audiences relevant to our clients (e.g. “Millennial Parents”). We are using Crimson Hexagon - a social listening platform with direct access to the Twitter API - to track and monitor conversations generated from these tribes in real-time.

 

To reach “trends” we must first identify significant terms (words or phrases). We’re defining significant as anomalous based on historical data from within the tribe’s user set and anomalous in comparison to the general population. Therefore each tribe’s data must be compared to itself and a general population tribe to determine what is significant to that tribe alone.

 

Our challenge is that we have no automated way to detect trends within these tribes in real-time. We believe there are two potential approaches, but welcome other solutions:

 

Potential Approach 1: Word/Phrase Indexing

  • Analyze term usage at a user level (i.e. the proportion of users posting tweets containing a given word out of the total set of users, e.g. 35% of users used the word “candle”)

  • Slice data into set intervals (e.g. daily, every 3 days, weekly, etc.)

  • Establish a rolling baseline of term usage based on previous data (e.g. 30 days, 90 days, etc.)

  • Index term usage against this rolling baseline accounting for variance within the baseline range

  • Index term usage against the general population to subtract general trends in term usage

  • Identify anomalous words/phrases and raise to user.
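The indexing steps above can be sketched as a z-score of each term's user-level share against its rolling baseline, with the general-population comparison subtracting global trends. A rough illustration; the threshold and function names are ours, not Big Spaceship's:

```python
from statistics import mean, stdev

def term_anomaly_index(current_share, baseline_shares):
    """Index a term's current user-level share (e.g. 0.35 = 35% of
    users used the word "candle") against the rolling baseline,
    accounting for variance within the baseline range."""
    mu = mean(baseline_shares)
    sigma = stdev(baseline_shares) or 1e-9  # guard against a flat baseline
    return (current_share - mu) / sigma

def flag_anomalous_terms(tribe_today, tribe_baseline, population_today,
                         threshold=3.0):
    """Raise terms that spike within the tribe but not in the general
    population, subtracting general trends in term usage."""
    flagged = []
    for term, share in tribe_today.items():
        z = term_anomaly_index(share, tribe_baseline[term])
        if z > threshold and share > population_today.get(term, 0.0):
            flagged.append((term, z))
    return sorted(flagged, key=lambda pair: -pair[1])
```

The same structure works for any of the interval choices listed (daily, every 3 days, weekly) by changing what one "share" observation represents.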

Potential Approach 2: Streaming Topic Model

    • Based around Liang, Yilmaz, Kanoulas’ paper “Dynamic Clustering of Streaming Short Documents”

    • Implement a Dynamic Clustering Topic Model (DCT), their proposed variation of Latent Dirichlet Allocation with one topic per tweet and a time-based dynamic topic model, on our data for each tribe

    • Slice data into set intervals (e.g. daily, every 3 days, weekly, etc.)

    • Establish topic distributions within each tribe at each time interval

    • Establish a rolling baseline of topic distributions based on previous data (e.g. 30 days, 90 days, etc.)

    • Index topic distribution against this rolling baseline accounting for variance within the baseline

    • Index topic distribution against the general population to subtract general trends in term usage

    • Identify anomalous topics and raise to user. Users can easily label topics based on the context of the terms within them
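Implementing DCT itself follows the cited paper; once each interval yields a topic distribution per tribe, the baseline-indexing steps mirror the word-level case. A stand-in sketch for that indexing step only, not for the paper's model:

```python
def topic_drift(current_dist, baseline_dists):
    """Compare this interval's topic distribution (topic id -> share
    of tweets) against the mean of the rolling baseline; large
    positive drift marks a topic worth raising to the user."""
    n = len(baseline_dists)
    return {
        topic: share - sum(d.get(topic, 0.0) for d in baseline_dists) / n
        for topic, share in current_dist.items()
    }
```

The general-population subtraction would apply the same function to a population tribe's distributions and discount topics drifting in both.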

 

From this analysis we would likely need daily exports of these top terms or topics in the form of CSVs relevant to each tribe.

 

THE KIND OF EXPERTISE REQUIRED:

Natural Language Processing

  • Topic Modeling

  • Tokenization

  • etc.

Anomaly detection

Unsupervised Learning

Neural Networks (optional)

Data Storage/Management

 

DATA SOURCES & FORMATS:

We expect to have 10-15 tribes with 500-2000 tweets per day. Each tribe will have a monitor in Crimson Hexagon; tweet links are pulled from Crimson and then tweet content is pulled from the Twitter API. We will collect and store this data daily for analysis.

 

CURRENT TECH STACK:

Python 3 (Required)

  • Pandas

  • NumPy

  • SciPy

  • scikit-learn

  • Gensim

  • TensorFlow

  • Peewee

PostgreSQL - Google CloudSQL (Flexible)

Spark (if necessary)

 

BID:

For our bidding process, we would like experts to submit an outline of their approach, a rationale explaining why that approach is the right solution, existing references they’ll use to support their approach (e.g. published white papers outlining an approach for a similar problem), and an estimate of hours. Our hourly rate will range between $100 - $200 for this project.

 

DELIVERABLE:

A replicable approach to detecting the emergence of trends within ongoing conversations, with thorough documentation describing the general methodology used.

 

LOCATION PREFERENCE:

We would like a collaborative working model in which the candidate works either onsite in Brooklyn or within the Eastern Time Zone, iterating alongside our in-house data scientist and analysts.

 

SAMPLE DATASET:

Crimson Hexagon /posts endpoint with Twitter Output (JSON):

{

   "posts": [

       {

           "url": "http://twitter.com/mirl/status/882700164401692672",

           "title": "",

           "type": "Twitter",

           "location": "VA, USA",

           "geolocation": {

               "id": "USA.VA",

               "name": "Virginia",

               "country": "USA",

               "state": "VA"

           },

           "language": "en",

           "assignedCategoryId": 4763388608,

           "assignedEmotionId": 4763388602,

           "categoryScores": [

               {

                   "categoryId": 4763388606,

                   "categoryName": "Basic Negative",

                   "score": 0

               },

               {

                   "categoryId": 4763388610,

                   "categoryName": "Basic Positive",

                   "score": 0

               },

               {

                   "categoryId": 4763388608,

                   "categoryName": "Basic Neutral",

                   "score": 1

               }

           ],

           "emotionScores": [

               {

                   "emotionId": 4763388602,

                   "emotionName": "Neutral",

                   "score": 0.86

               },

               {

                   "emotionId": 4763388603,

                   "emotionName": "Sadness",

                   "score": 0.01

               },

               {

                   "emotionId": 4763388607,

                   "emotionName": "Surprise",

                   "score": 0

               },

               {

                   "emotionId": 4763388604,

                   "emotionName": "Fear",

                   "score": 0

               },

               {

                   "emotionId": 4763388605,

                   "emotionName": "Disgust",

                   "score": 0

               },

               {

                   "emotionId": 4763388611,

                   "emotionName": "Anger",

                   "score": 0

               },

               {

                   "emotionId": 4763388609,

                   "emotionName": "Joy",

                   "score": 0.12

               }

           ],

           "imageInfo": [

               {

                   "url": "http://pbs.twimg.com/media/DD_7HvpWsAEHq4E.jpg"

               }

           ]

       }

   ],

  "totalPostsAvailable": 1,

  "status": "success"

}
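Since tweet text is re-hydrated from the Twitter API, the status IDs must first be extracted from the post URLs in the /posts response above. A minimal sketch using only the fields shown in the sample:

```python
import json

def status_ids_from_posts(posts_payload):
    """Pull tweet status IDs out of Crimson Hexagon /posts URLs
    (e.g. http://twitter.com/mirl/status/882700164401692672) so they
    can be passed to the Twitter /statuses/lookup endpoint."""
    data = json.loads(posts_payload)
    ids = []
    for post in data.get("posts", []):
        url = post.get("url", "")
        if "/status/" in url:
            ids.append(url.rsplit("/status/", 1)[1])
    return ids
```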

Example of the Twitter API /statuses/lookup endpoint (JSON):

[

 {

   "created_at": "Tue Mar 21 20:50:14 +0000 2006",

   "id": 20,

   "id_str": "20",

   "text": "just setting up my twttr",

   "source": "web",

   "truncated": false,

   "in_reply_to_status_id": null,

   "in_reply_to_status_id_str": null,

   "in_reply_to_user_id": null,

   "in_reply_to_user_id_str": null,

   "in_reply_to_screen_name": null,

   "user": {

     "id": 12,

     "id_str": "12",

     "name": "Jack Dorsey",

     "screen_name": "jack",

     "location": "California",

     "description": "",

     "url": null,

     "entities": {

       "description": {

         "urls": []

       }

     },

     "protected": false,

     "followers_count": 2577282,

     "friends_count": 1085,

     "listed_count": 23163,

     "created_at": "Tue Mar 21 20:50:14 +0000 2006",

     "favourites_count": 2449,

     "utc_offset": -25200,

     "time_zone": "Pacific Time (US & Canada)",

     "geo_enabled": true,

     "verified": true,

     "statuses_count": 14447,

     "lang": "en",

     "contributors_enabled": false,

     "is_translator": false,

     "is_translation_enabled": false,

     "profile_background_color": "EBEBEB",

     "profile_background_image_url": "http://abs.twimg.com/images/themes/theme7/bg.gif",

     "profile_background_image_url_https": "https://abs.twimg.com/images/themes/theme7/bg.gif",

     "profile_background_tile": false,

     "profile_image_url": "http://pbs.twimg.com/profile_images/448483168580947968/pL4ejHy4_normal.jpeg",

     "profile_image_url_https": "https://pbs.twimg.com/profile_images/448483168580947968/pL4ejHy4_normal.jpeg",

     "profile_banner_url": "https://pbs.twimg.com/profile_banners/12/1347981542",

     "profile_link_color": "990000",

     "profile_sidebar_border_color": "DFDFDF",

     "profile_sidebar_fill_color": "F3F3F3",

     "profile_text_color": "333333",

     "profile_use_background_image": true,

     "default_profile": false,

     "default_profile_image": false,

     "following": true,

     "follow_request_sent": false,

     "notifications": false

   },

   "geo": null,

   "coordinates": null,

   "place": null,

   "contributors": null,

   "retweet_count": 23936,

   "favorite_count": 21879,

   "entities": {

     "hashtags": [],

     "symbols": [],

     "urls": [],

     "user_mentions": []

   },

   "favorited": false,

   "retweeted": false,

   "lang": "en"

 },

 {

   "created_at": "Sun Feb 09 23:25:34 +0000 2014",

   "id": 432656548536401920,

   "id_str": "432656548536401920",

   "text": "POST statuses/update. Great way to start. https://t.co/9S8YO69xzf (disclaimer, this was not posted via the API).",

   "source": "web",

   "truncated": false,

   "in_reply_to_status_id": null,

   "in_reply_to_status_id_str": null,

   "in_reply_to_user_id": null,

   "in_reply_to_user_id_str": null,

   "in_reply_to_screen_name": null,

   "user": {

     "id": 2244994945,

     "id_str": "2244994945",

     "name": "TwitterDev",

     "screen_name": "TwitterDev",

     "location": "Internet",

     "description": "Developers and Platform Relations @Twitter. We are developers advocates. We can't answer all your questions, but we listen to all of them!",

     "url": "https://t.co/66w26cua1O",

     "entities": {

       "url": {

         "urls": [

           {

             "url": "https://t.co/66w26cua1O",

             "expanded_url": "/",

             "display_url": "dev.twitter.com",

             "indices": [

               0,

               23

             ]

           }

         ]

       },

       "description": {

         "urls": []

       }

     },

     "protected": false,

     "followers_count": 3147,

     "friends_count": 909,

     "listed_count": 53,

     "created_at": "Sat Dec 14 04:35:55 +0000 2013",

     "favourites_count": 61,

     "utc_offset": -25200,

     "time_zone": "Pacific Time (US & Canada)",

     "geo_enabled": false,

     "verified": true,

     "statuses_count": 217,

     "lang": "en",

     "contributors_enabled": false,

     "is_translator": false,

     "is_translation_enabled": false,

     "profile_background_color": "FFFFFF",

     "profile_background_image_url": "http://abs.twimg.com/images/themes/theme1/bg.png",

     "profile_background_image_url_https": "https://abs.twimg.com/images/themes/theme1/bg.png",

     "profile_background_tile": false,

     "profile_image_url": "http://pbs.twimg.com/profile_images/431949550836662272/A6Ck-0Gx_normal.png",

     "profile_image_url_https": "https://pbs.twimg.com/profile_images/431949550836662272/A6Ck-0Gx_normal.png",

     "profile_banner_url": "https://pbs.twimg.com/profile_banners/2244994945/1391977747",

     "profile_link_color": "0084B4",

     "profile_sidebar_border_color": "FFFFFF",

     "profile_sidebar_fill_color": "DDEEF6",

     "profile_text_color": "333333",

     "profile_use_background_image": false,

     "default_profile": false,

     "default_profile_image": false,

     "following": true,

     "follow_request_sent": false,

     "notifications": false

   },

   "geo": null,

   "coordinates": null,

   "place": null,

   "contributors": null,

   "retweet_count": 1,

   "favorite_count": 5,

   "entities": {

     "hashtags": [],

     "symbols": [],

     "urls": [

       {

         "url": "https://t.co/9S8YO69xzf",

         "expanded_url": "/docs/api/1.1/post/statuses/update",

         "display_url": "dev.twitter.com/docs/api/1.1/p…",

         "indices": [

           42,

           65

         ]

       }

     ],

     "user_mentions": []

   },

   "favorited": false,

   "retweeted": false,

   "possibly_sensitive": false,

   "lang": "en"

 }

]

Media and Advertising
Product Development
Social Media Research

$100/hr - $200/hr

Starts Feb 05, 2018

12 Proposals Status: COMPLETED

Client: B*** *********

Posted: Jan 23, 2018

Latent Class Segmentation on Survey Data

Perform latent class segmentation on survey data to identify meaningful, actionable, and targetable segments based on needs, attitudes, and usage of telco services and technology in general. 

Specific tasks:

1. Perform latent class segmentation on survey data. 

2. Build a predictive algorithm to predict segment membership for the internal customer database. Essentially, build the segmentation on survey data and then assign segment membership to the internal database.
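Latent class segmentation on categorical survey items is usually fit in specialized tools; as a rough Python stand-in for the two tasks, a mixture model can stand in for the latent class step and a classifier for the typing tool (function names and parameters are illustrative):

```python
import numpy as np
from sklearn.mixture import GaussianMixture
from sklearn.ensemble import RandomForestClassifier

def segment_and_type(survey_responses, database_records, n_segments=4, seed=0):
    """Task 1: fit a mixture model on survey responses (a stand-in for
    latent class analysis; a true LCA would model categorical items
    directly). Task 2: train a typing tool on the learned segments and
    assign membership to internal database records. In practice the
    typing tool must use only variables that also exist in the database."""
    lca = GaussianMixture(n_components=n_segments, random_state=seed)
    segments = lca.fit_predict(survey_responses)
    typing_tool = RandomForestClassifier(random_state=seed)
    typing_tool.fit(survey_responses, segments)
    return segments, typing_tool.predict(database_records)
```

The key design constraint is the second task: the typing tool only generalizes to the customer database if it is trained on the subset of features available there.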

Kind of resource:

1. Expert in latent class segmentation on survey data and customer transaction data

2. Must have executed multiple projects in latent class segmentation and building typing or predictive tool to assign segment membership 

Analytics

$70/hr - $150/hr

Starts Feb 05, 2018

14 Proposals Status: COMPLETED

Client: G****** ********* ********** *******

Posted: Jan 17, 2018

Data Quality Anomaly Detection and Suggestion Engine

Summary

We are a large IT infrastructure organization looking to improve the quality of our operational infrastructure monitoring data.  Our goal with this project is to develop an API that will detect anomalies in single tables of structured data, using a combination of unsupervised machine learning methods and defined rules, and suggest new values for anomalous data.  The API will support a larger project that includes the visualization of these anomalies in a dashboard; however, this Experfy project will focus on the API and ML modelling.

Scope of Work

The selected expert will be responsible for:

Defining an API to provide anomaly detection and suggestion services

Developing an unsupervised learning model to detect anomalous data related to each of 24 Key Business Elements (KBEs) in our data

Coding logic to detect additional anomalies according to predefined rules for each KBE

Implementing the defined API with the completed model and rules logic

Demonstrating the robustness of the model using various test data sets, including data with both similar (tech/infrastructure) and dissimilar (Fisher’s iris flowers, etc.) contexts

Supporting QA and visualizer dashboard development efforts as bugs or issues are discovered in the API or model

The primary output of this project is an API for detecting anomalies in our infrastructure operations data.

The attached presentation provides additional details about the data, environment, and project requirements, and gives additional context on the broader project scope (including the visualization dashboard project).  Since this project focuses on the data anomaly detection engine only, project details out of scope for this Experfy posting have been greyed out for clarity; however, they are still very relevant to your implementation.

Challenge Format

We plan to hire more than one expert to implement their model using a common initial data set.  The different approaches will be evaluated after initial implementation, and only one expert will be asked to continue with the project, refining their model and building the API.  The period for determining which approach will be used (and who will complete the final project deliverable) is variable but is expected to last 1-2 weeks.

Proposal

As part of your proposal please answer the following questions:

Please describe the approach you intend to use to solve this problem (please describe both anomaly detection and value suggestion).

What trade-offs are you making when choosing one approach over another?

Which technology stack would you use for this challenge?

What are the underlying assumptions you are making about the data set for this proposal?

How would you approach tuning the parameters for the chosen approach?

How do you plan to evaluate the performance of the model?

How do you plan to develop the API?

Hi-Tech
Machine Learning
Analytics

$25,000 - $35,000

Starts Jan 05, 2018

17 Proposals Status: IN PROGRESS

Net 60

Client: C*******

Posted: Dec 29, 2017

DUI Arrest Logs Entity Extraction

We are a Northern California-based law firm looking to create an application that will scrape DUI arrests from the arrest logs on these 18 county sites, scrape the Breeze database, and identify the records that match, building a single file of records found on both sources. The application needs to run once a week and save the results file to a web server; when the file is available, it should email the admin a link to download it. A sample file is enclosed.
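The core matching step (records present on both the county arrest logs and the Breeze database) is essentially a join on identifying fields. A minimal sketch assuming normalized name plus date as the match keys; the actual keys depend on the enclosed sample file:

```python
def match_records(arrest_log, breeze):
    """Return arrest-log records that also appear in the Breeze data,
    keyed on normalized name and date (assumed match keys)."""
    def key(record):
        return (record["name"].strip().lower(), record["date"])
    breeze_keys = {key(r) for r in breeze}
    return [r for r in arrest_log if key(r) in breeze_keys]
```

Building a key set first keeps the weekly run linear in the number of records rather than comparing every pair.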

The expert for this project has already been selected.

Legal
Software and Web Development

$100/hr - $200/hr

Starts Jan 10, 2018

5 Proposals Status: IN PROGRESS

Client: G******* *** **********

Posted: Dec 12, 2017

Cassandra Expert (Travel Required)

Over the past 140 years, Ericsson has been at the forefront of communications technology. Today, we are committed to maximizing customer value by continuously evolving our business portfolio and leading the ICT industry.

Ericsson is looking to hire an expert in Cassandra data model optimization and deployment architecture. We are looking for someone with deep experience with all aspects of Cassandra including schema design, compaction, migration as well as deployment.

  • Must have over 5 years of experience with database systems. 
  • Must be familiar with public cloud providers including Amazon and Azure.

The contract will require some face time in Boston to gather information about the existing deployment. The consultant must also provide support to troubleshoot performance issues until the targeted performance KPIs are achieved for a defined workload.

Please ensure your Experfy profile is up to date.

Hi-Tech
Telecommunications
Data Engineering

$100/hr - $300/hr

Starts Jan 15, 2018

9 Proposals Status: CLOSED

Client: E********

Posted: Dec 11, 2017

Create Customer-specific Pricing Algorithm Based On Historical Pricing Data

Project Overview:

We are looking for assistance in refining our quoting by creating a pricing algorithm. The end deliverable will be a custom (per customer per material/supplier) pricing model which maximizes revenue and minimizes costs for 1) labor and 2) material.

Company Profile:

We are a local DFW, TX sand and gravel trucking broker primarily serving the construction industry. We do not own trucks nor employ drivers.  We work with owner/operators to complete orders.  

We have two main input costs – 1) material (sand, gravel, rock, etc.) and 2) trucking costs (paying the owner/operators for their trucking services).  Trucking (labor) makes up approximately 80% of our revenue with the remaining 20% derived from material sales.

These two input costs are marked up a certain percentage and passed onto our customers. All jobs will incur trucking costs; only a subset of jobs involve materials costs too.

Data Source:

We have recently invested significant time and resources into a custom quoting, dispatch, and job completion system hosted on Amazon Web Services.  

The MySQL database houses all the quoting information and actual deliveries completed (successful quotes).

Our current quoting methodology is based on drive time (distance/time) for labor, plus material costs from local pits/dumps, each marked up a certain percentage.

We would like to refine our pricing methodology based on individual customers (greater discounts for high-volume customers, higher markups for riskier customers, higher costs for slower/less sophisticated customers), maximizing our revenue and minimizing our costs.

Deliverable:

The deliverable would integrate into our MySQL database and refine itself over time. The algorithm should take into consideration volumes/seasonality and cover the following areas:

1. Trucking costs – provide an estimated trucking cost to minimize cost on a per-unit basis (e.g., per ton, cubic yard, load, or hour) given different truck types (sizes). We are thinking about eventually allowing drivers to bid on each project to further refine our pricing. We utilize several different truck sizes which, all other things being equal, would result in a different trucking cost for each truck size.

Problem to solve: what's the lowest price we can pay our truckers and still successfully find truckers to perform the work?
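A per-unit cost comparison across truck sizes, as described above, might be sketched like this. The hourly rates, capacities, and cycle times are illustrative assumptions, not values from the actual dispatch system.

```python
# Sketch of a per-unit trucking cost comparison across truck sizes.
# Rates, capacities, and cycle times below are illustrative assumptions.

def cost_per_ton(hourly_rate, cycle_hours, tons_per_load):
    """Cost to move one ton for a given truck on a given route:
    (driver pay per hour * round-trip hours) / tons carried per load."""
    return hourly_rate * cycle_hours / tons_per_load

trucks = {
    "tandem (17 t)":     cost_per_ton(95.0, 1.5, 17),
    "super dump (26 t)": cost_per_ton(115.0, 1.6, 26),
}
best = min(trucks, key=trucks.get)  # lowest cost per ton for this route
```

With driver bidding in place, the bid history per truck size could replace the assumed hourly rates as the input to this comparison.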

2. Material costs – material costs are “fixed” by our suppliers (local pits and dumps with whom we have purchasing accounts) and priced by either the ton, cubic yard, or load. The algorithm could suggest lower-cost alternatives which may help win the bid but may be a further distance away from the job site.

Problem to solve: are there suppliers which may be located a little further from the job site, but would offer a lower total cost to the customer (with a similar material quality)?
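The supplier trade-off above amounts to minimizing material price plus haul cost. A minimal sketch, assuming a flat per-mile trucking rate and made-up supplier figures (both would come from the MySQL database in practice):

```python
# Sketch of supplier selection by total delivered cost: each supplier's
# material price plus estimated haul cost (distance * per-mile rate).
# The rate and supplier data are illustrative assumptions.

def total_delivered_cost(material_price, miles_to_site, haul_rate_per_mile):
    return material_price + miles_to_site * haul_rate_per_mile

def cheapest_supplier(suppliers, haul_rate_per_mile):
    """suppliers: list of (name, material_price, miles_to_site) tuples."""
    return min(
        suppliers,
        key=lambda s: total_delivered_cost(s[1], s[2], haul_rate_per_mile),
    )

suppliers = [("Pit A", 12.00, 5), ("Pit B", 9.50, 18)]
# At $0.50/mile: Pit A totals 14.50, Pit B totals 18.50, so the nearer
# but pricier Pit A wins; at a low enough haul rate, Pit B would win.
winner = cheapest_supplier(suppliers, 0.50)
```

A real version would also constrain candidates to similar material quality, per the problem statement.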


3. Trucking + Materials (when applicable) revenue – provide estimated values to maximize our chance of winning bids on a per customer basis. This should be a flexible calculation based on full and partial truck loads. When trucking costs and material costs are priced in the same unit of measure, the customer will only be presented with a single price quote; when trucking costs and material costs are priced in different units of measure, the customer will see separate price quotes for both materials and labor.

Problem to solve: Considering credit risk, what is a total, all-in cost for the customer to maximize our revenue and win the bid?
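The quote-presentation rule in point 3 (one combined price when units match, separate prices otherwise) can be sketched directly. The dict shape is an assumption about how the quoting system might represent a quote:

```python
# Sketch of the quote-presentation rule: combine trucking and material
# into one price only when both are priced in the same unit of measure.
# The returned dict structure is an illustrative assumption.

def build_quote(trucking_price, trucking_unit,
                material_price=None, material_unit=None):
    if material_price is None:
        # Trucking-only job: every job incurs trucking costs.
        return {"total": trucking_price, "unit": trucking_unit}
    if trucking_unit == material_unit:
        # Same unit of measure: present a single combined price.
        return {"total": trucking_price + material_price, "unit": trucking_unit}
    # Different units: present labor and material separately.
    return {"trucking": (trucking_price, trucking_unit),
            "material": (material_price, material_unit)}
```

For example, trucking at $10/ton plus material at $5/ton yields one $15/ton quote, while trucking by the hour plus material by the ton yields two line items.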

4. Credit Risk/Quality of Customer – items which may be used in determining credit risk:

  • average size ($$$) of invoices (higher average better)
  • total billed revenue (more better)
  • time as customer (longer better)
  • Days Sales Outstanding (lower better) – how slowly does this customer pay; our terms are typically N30.
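One way to fold the four factors above into a single risk number, which could then drive the per-customer markup. The weights and scaling here are illustrative assumptions to be fitted against historical payment data, not prescribed values:

```python
# Illustrative credit-risk score combining the four listed factors.
# Weights and scale caps are assumptions for the sketch only.

def credit_risk_score(avg_invoice, total_revenue, months_as_customer, dso):
    """Higher score = riskier customer = higher markup."""
    risk = 0.0
    risk += max(0.0, (dso - 30) / 30)          # pays slower than N30 terms
    risk -= min(1.0, avg_invoice / 10_000)     # larger invoices reduce risk
    risk -= min(1.0, total_revenue / 100_000)  # more billed revenue reduces risk
    risk -= min(1.0, months_as_customer / 24)  # longer tenure reduces risk
    return risk

# A long-standing, prompt-paying customer scores lower (less risky)
# than a new customer who pays in 60 days.
loyal = credit_risk_score(8_000, 250_000, 36, 28)
newcomer = credit_risk_score(1_000, 2_000, 2, 60)
```

A fitted model (e.g., a logistic regression on late-payment outcomes) would replace these hand-set weights once enough history accumulates in the database.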

5. Seasonality – supply and demand. Rainy season = lower demand / higher (available) supply. The ability exists to pay drivers less when demand is lower.

6. Actual vs estimated time spent on each job – currently our labor cost estimates are based off estimated time to complete. Shortly we will be rolling out the ability to track precisely the time it takes to complete each job. This information will need to be planned for and incorporated into any deliverable.

Applicants preferred from New Zealand, Australia, and the United States.

Transportation and Warehousing
Finance
Pricing and Actuarial

$5,000 - $15,000

Starts Jan 06, 2018

14 Proposals Status: IN PROGRESS

Client: A******** ********

Posted: Dec 08, 2017
