Browse Projects

305 Projects that match your criteria



Prototype to Capture Sound and Audio Attributes for Voice Analytics

Goal:

We are a Fortune 500 company in the Healthcare sector. We want to develop software, or software plus a device, that will capture audio and save it in a non-audio format. We want this software/device to capture data about audio that can be used for machine learning and deep learning projects. The need to protect patient privacy is the key driver of this project’s main requirement – saving audio data to a non-audio file.

Our Ideal Expert:

Someone with extensive experience programming audio software and/or someone with a strong understanding of acoustics – for example, an Acoustical Engineer or someone with expertise in voice analytics.

The Requirements:

Our initial thought is to develop software that would run on a Raspberry Pi. This is not a hard requirement, but it would be a useful starting point for a prototype. We are open to the expert’s advice and welcome suggestions for off-the-shelf hardware that could be used.

The software developed will capture audio using an omnidirectional microphone. The captured audio will not be written to hard disk in an audio file format. Instead, the audio attributes listed below will be written to a CSV file. Please note that the audio features listed below are only the minimum required features. We will defer to the expert’s advice on additional features to capture from a voice analytics perspective.

Features:

  • Mel-frequency cepstral coefficients (MFCCs)
  • Spectral slope
  • Audio spectral flatness
  • Audio spectral centroid
  • Audio spectral envelope
  • Any additional audio features that may help in the identification of sound

Each row of the CSV file will represent a time increment (e.g. one second) and will begin with a timestamp; the file’s columns will be the audio features listed above. An important requirement is that we, as users of the software/device, be able to adjust the duration of each sound capture. We also need to be able to adjust the interval at which data is captured. For example, we may want to set the device to record one second of sound every two minutes.
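To make the capture-to-CSV flow concrete, below is a minimal Python sketch of one possible implementation. It assumes the sounddevice and librosa packages (neither is mandated by this posting); the sample rate, capture window, interval, and file path are placeholder parameters, and the audio buffer is held in memory only and never written to disk.

import csv
import time
from datetime import datetime, timezone

import numpy as np
import sounddevice as sd
import librosa

SAMPLE_RATE = 16000        # Hz (placeholder)
CAPTURE_SECONDS = 1.0      # length of each capture window (user-adjustable)
INTERVAL_SECONDS = 120.0   # time between captures (user-adjustable)
CSV_PATH = "audio_features.csv"

def extract_features(y: np.ndarray, sr: int) -> dict:
    """Reduce one in-memory audio buffer to scalar features; the audio itself is discarded."""
    mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=13).mean(axis=1)
    row = {f"mfcc_{i}": float(v) for i, v in enumerate(mfcc)}
    row["spectral_flatness"] = float(librosa.feature.spectral_flatness(y=y).mean())
    row["spectral_centroid"] = float(librosa.feature.spectral_centroid(y=y, sr=sr).mean())
    # Spectral slope, envelope, and any further features would be added here as extra columns.
    return row

def capture_once() -> dict:
    """Record one capture window into memory and return its feature summary."""
    buf = sd.rec(int(CAPTURE_SECONDS * SAMPLE_RATE), samplerate=SAMPLE_RATE,
                 channels=1, dtype="float32")
    sd.wait()  # block until the buffer is full
    return extract_features(buf.ravel(), SAMPLE_RATE)

def main() -> None:
    silence = np.zeros(int(CAPTURE_SECONDS * SAMPLE_RATE), dtype=np.float32)
    fieldnames = ["timestamp"] + list(extract_features(silence, SAMPLE_RATE).keys())
    with open(CSV_PATH, "a", newline="") as f:
        writer = csv.DictWriter(f, fieldnames=fieldnames)
        if f.tell() == 0:          # new file: write the header once
            writer.writeheader()
        while True:
            row = {"timestamp": datetime.now(timezone.utc).isoformat()}
            row.update(capture_once())
            writer.writerow(row)
            f.flush()
            time.sleep(INTERVAL_SECONDS)

if __name__ == "__main__":
    main()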

The software/device will need the ability to connect to a Wi-Fi network and transfer data collected to a destination specified by the user. The user will also need the ability to define the interval of data transfer.

Finally, the data recorded by this device must never be convertible back into sound (i.e. the original audio must not be recoverable from the captured data).

At the end of the project the expert should produce full specifications for the hardware device and microphone used in development.

Proposals:

In your proposal please provide the following:

  1. Your suggestions on hardware and your specific approach to this problem.
  2. How will you ensure that the audio data is converted to a delimited file and that no audio file is created in the process?
  3. What are your proposed milestones and how many hours should be budgeted for each?
  4. Your relevant experience with a project like this one.

Healthcare
Engineering and Design
Software and Web Development

$100/hr - $200/hr

1 Proposal Status: HIRING

Net 30

Company small

Client: F************************************

Posted: Feb 16, 2018


Develop a Big Data Strategy, Architecture and Roadmap for a Direct-to-Consumer Mass Personalization Platform in the Consumer Goods Sector

TASK

Create a list of potential data sources and conduct a feasibility assessment of these sources to determine if and how data science can be applied (to each separately and in combination) to grow revenue and reduce costs for retailers and consumer goods manufacturers.

 

 

COMPANY OVERVIEW

INS builds a scalable blockchain-based Direct-to-Consumer mass personalization platform for the consumer sector (FMCG, CPG, groceries). There are three parties involved: manufacturers, retailers, and consumers. Consumer goods manufacturers and retailers will be able to provide personalised offers directly to consumers (discounts, promotions, etc.) via a mobile app, based on consumer personal data (gathered online), consumer purchase history (gathered offline via POS at checkout), and other data sources (from manufacturers, retailers, etc.).

 

 

PROBLEMS TO SOLVE

  • What data should be gathered?
  • How to use these data to increase sales in retail stores?
  • How to use these data to increase sales for selected consumer goods manufacturers?
  • How to align the interests of consumer goods manufacturers and retailers?
  • How to make personalised offers (discounts, rewards, etc.) for consumers with best-in-class personalisation and success rates (customers view offers via a mobile app and purchase promoted items in offline stores)?

 

 

WHO WE NEED

  • Data scientist with 3+ years of experience in building mass market products
  • Deep understanding of the retail sector
  • Deep understanding of the consumer goods sector
  • Out-of-the-box and creative mindset

 

 

DATA SOURCES TO BE USED

  • Consumer personal data
  • Consumer social networks data
  • Data from a consumer goods manufacturer
  • Data from a retail store
  • Any other data sources that can be useful (weather forecast, holidays schedule, traffic situation, location-specific data, etc.)

 

 

TECHNOLOGY

  • Ruby on Rails
  • Hadoop
  • Any other relevant technologies (we're open to the most advanced and cutting-edge solutions; we haven't started the development yet)

 

 

DELIVERABLE

Advisory service on developing a big data strategy, architecture and roadmap.

Consumer Goods and Retail
Media and Advertising
Customer Loyalty

$150/hr - $250/hr

Starts Feb 18, 2018

11 Proposals Status: HIRING

Company small

Client: I***

Posted: Feb 10, 2018


Build Segment Classification Model

Background: We have built need-based segments using survey data from B2B customers drawn from our internal B2B database of several thousand customers. There are six segments.

GOAL: Build a predictive model to classify the segment membership of all B2B customers in the database (those who were not part of the survey).

Data:

  1. Survey data with segments already identified.
  2. We will provide 2 years of internal transaction data. See the sample of variables captured in the attached file.

Output: A classification model that classifies each of the ~29k B2B customers into one of the six segments. The client needs the code to run internally.

Your task:

  1. Understand the secondary data.
  2. Build a predictive model to classify internal customers into one of the six segments, with a probability of membership in each segment (a brief sketch follows this list).
  3. Provide code to the client so that they can run it internally. The type of code will depend on the client’s system (most likely Python or SQL).
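As a rough, non-binding sketch of task 2, the snippet below trains a classifier on the surveyed (labelled) customers and scores the remaining database with per-segment probabilities. The file names, ID column, and choice of gradient boosting are illustrative assumptions only.

import pandas as pd
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import cross_val_score

# Labelled survey sample and the ~29k unlabelled internal customers
# (file names and column names are placeholders).
survey = pd.read_csv("survey_with_segments.csv")
customers = pd.read_csv("internal_transactions.csv")

features = [c for c in survey.columns if c not in ("customer_id", "segment")]

model = GradientBoostingClassifier(random_state=0)
print("CV accuracy:", cross_val_score(model, survey[features], survey["segment"], cv=5).mean())
model.fit(survey[features], survey["segment"])

# Hard assignment plus the probability of belonging to each of the six segments.
proba = model.predict_proba(customers[features])
scored = customers[["customer_id"]].copy()
scored["segment"] = model.predict(customers[features])
for i, segment in enumerate(model.classes_):
    scored[f"p_{segment}"] = proba[:, i]
scored.to_csv("segment_assignments.csv", index=False)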

Please do not bid on this project, as it will be awarded to a pre-selected expert.

Predictive Modeling
Analytics

$2,000 - $2,300

Starts Feb 12, 2018

1 Proposal Status: IN PROGRESS

Company small

Client: G***********************************

Posted: Feb 09, 2018


Recommendation Engine for Video Streaming Platform

Company Description

We are a young technology company that has pioneered online streaming over mobile devices. We enable our clients to seamlessly stream their content to consumers through our myplex platform. Our services also include advising our clients and helping them optimize their acquisition and retention strategies.

Problem Statement

Our vision is to create a best-in-industry recommendation engine which would aid the content discovery process for users globally by suggesting content based on their likes and behaviour. The recommendation engine should be able to use internally and externally available data about the users/content and leverage sophisticated machine learning models to arrive at relevant recommendations.

The recommendation engine should be able to address capabilities including, but not limited to, the following points:

  1. Recommend relevant content to viewers based on their in-app behaviour and demographic details (in the case of a new user where no other data is available)
  2. Create user profiles based on their content consumption history
  3. Account for recency of behaviour, e.g. content viewed in the last 24 hours could be given a higher weighting than content viewed one month ago (a brief weighting sketch follows this list)
  4. Provide recommendations based on the time of day the content is played/browsed, e.g. a user might have a habit of consuming news content during the day and movies in the evening
  5. Give higher weighting to content viewed for a longer duration
  6. Capture pure browse behaviour – where the user browses but does not consume
  7. For new users, the model should be able to learn the user's preferences by asking questions on language, favourite genre/movie/actor/director, etc.
  8. Track user preferences based on the content s/he selects from all the content served as recommendations, and build them into future recommendations
  9. Provide recommendations for different categories, e.g. Recommended for You, Similar Movies, Recommended Movies, Recommended TV Series, New Movies, etc.
  10. To further refine recommendations, along with relevance to the user, also consider the overall popularity of the content through internal (usage) as well as external data (IMDb, Rotten Tomatoes, etc.)
  11. Should be able to extract tags for content using external databases (Wikipedia, IMDb, Rotten Tomatoes, etc.)
  12. Our platform handles millions of content streams on a monthly basis, hence the recommendation engine should be able to process the data and provide real-time results
  13. Get the user behaviour data in the required format using the APIs provided by the app
  14. Should be capable of adding placeholders to manually add content to the carousels in addition to the recommendations when required
  15. Provide an interface (portal as well as app) for users to specify their preferences and showcase recommendations based on the inputs they share
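Purely to illustrate points 3 and 5 above (recency and watch-duration weighting), the sketch below applies an exponential time decay and a completion factor to each viewing event. The half-life value, field names, and event structure are assumptions rather than the platform's actual schema.

from datetime import datetime, timezone

HALF_LIFE_HOURS = 72.0   # illustrative: a 3-day-old view counts half as much as a fresh one

def event_weight(event: dict, now: datetime) -> float:
    """Weight one viewing event by recency (point 3) and watch duration (point 5)."""
    age_hours = (now - event["viewed_at"]).total_seconds() / 3600.0  # viewed_at: timezone-aware datetime
    recency = 0.5 ** (age_hours / HALF_LIFE_HOURS)
    completion = min(event["seconds_watched"] / event["duration_seconds"], 1.0)
    return recency * completion

def affinity_scores(events: list) -> dict:
    """Aggregate weighted events into {(user_id, content_id): score} for ranking candidates."""
    now = datetime.now(timezone.utc)
    scores = {}
    for e in events:
        key = (e["user_id"], e["content_id"])
        scores[key] = scores.get(key, 0.0) + event_weight(e, now)
    return scores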

Data:

We have multiple clients using the platform, and the data available might vary based on the information made available by each client. Enclosed are the details of data availability. Our data is stored in MySQL and Hadoop. We would also like to use other publicly available data sources for enrichment.

Questions that we have:

We have the following information requests/questions to gather the details required to consider partnering with your organization to develop the aforementioned recommendation engine:

  • For which industries have you developed recommendation engines previously? Please share a broad description of the underlying logic. We would also like to explore one of the recommendation engines that you have deployed for your clients in the past.
  • Profiles of the experts who will work on the project, along with details of their qualifications, past projects, and the tools and technologies they have used.
  • What technology stack would you suggest for developing the recommendation engine?
  • What approach would you use to develop the recommendation engine?
  • How would you suggest evaluating the quality of recommendations?

Timelines

The project needs to be completed within 6 weeks

Please provide a list of milestones and a ballpark estimate of hours to complete each.

Hi-Tech
Telecommunications
Deep Learning

$70/hr - $150/hr

Starts Feb 12, 2018

9 Proposals Status: HIRING

Company small

Client: A*******************

Posted: Feb 05, 2018


Development of Global Philanthropy Platform (Minimal Viable Product)

We are seeking proposals to engage the services of a development team able to develop and deploy a unique global online philanthropic venture (“the Vehicle”). This platform is to be built using a popular web development framework, such as Rails, Django, or Node.js.

The Platform

A "pass-through" Vehicle that leverages the latest technology with best-in-class accountability and transparency practices to unleash the catalytic potential of Philanthropy giving donors an unprecedented choice to direct funds strategically and effectively towards the world’s major humanitarian and developmental challenges.

The Vehicle intends to improve and facilitate impact-driven bespoke reporting, with healthy competition for receipt of funds driving organisations to improve governance and the quality of impact reporting. This will in turn stimulate and encourage further donations through the Vehicle.

The platform will ultimately utilise Big Data to pull information from multiple sources and generate Automatic Reports for the donors as well as the recipient agencies. It will also have advanced artificial intelligence (AI) and smart algorithms to perform the following:

  • Analyse data to showcase humanitarian needs based on humanitarian and development aid agencies input, which would feed-in and update the humanitarian priorities section on the platform in real time.
  • Analyse and display trends to make it easy for donors to identify causes and beneficiaries of choice.
  • Showcase the impact of the mass micro-donations of the retail donors.
  • Generate reports on donor trends, which will assist beneficiary agencies to position their appeals based on donor interests and requirements.
  • Provide real-time and balanced exposure to all humanitarian needs around the world, enabling donors to make informed decisions on most pressing issues and needs.

Scope of This Effort

For this current project, we are seeking to establish a strong technical basis in the form of a "minimal viable product" (MVP) which will allow us to demonstrate the core capabilities of the system to manage both donors and charities, to accept donations for subsequent distribution, and to perform basic reporting functionality. If this MVP is successful then subsequent projects will focus on incorporating more advanced capabilities as described above. 

We have developed a set of reference user experiences and use cases as well as a system architecture for the ultimate system (see the attached documents) but expect that the developer will work with the sponsors to adapt this design as appropriate to ensure that we meet our very tight schedules. Our emphasis for this project is to develop and deploy a fieldable system, knowing that we will incorporate additional functionality as we grow.  Subsequent projects will focus on the analytics and machine learning aspects of the project.

At a high level, this project will result in a system which accomplishes the following:

  • A browser based desktop application able to support donors, charities and the platform administrators 
  • Secure user management capabilities for donors, charities and platform administrators
  • A content management capability for representing the charities and associated projects (a minimal data-model sketch for these entities follows this list)
  • An integrated payment system able to accept and track donations
  • An appropriate business intelligence tool for providing reports to both the charities and the platform manager. This visualization and querying tool could be constructed using existing tools (e.g. Tableau), the AWS-hosted service QuickSight, or perhaps a custom-development approach, e.g. D3.
  • The architecture will be designed to leverage Amazon Web Services as appropriate.
  • A successful completion of this project will result in a system that has been deployed on AWS 
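To ground the charity/project/donation bullets above, here is a minimal Django-style data-model sketch (Django being one of the frameworks the posting allows). The model and field names are assumptions for illustration only and deliberately omit payment-gateway integration.

from django.conf import settings
from django.db import models

class Charity(models.Model):
    name = models.CharField(max_length=255)
    description = models.TextField(blank=True)
    verified = models.BooleanField(default=False)   # accountability/transparency flag

class CharityProject(models.Model):
    charity = models.ForeignKey(Charity, on_delete=models.CASCADE, related_name="projects")
    title = models.CharField(max_length=255)
    target_amount = models.DecimalField(max_digits=12, decimal_places=2)

class Donation(models.Model):
    donor = models.ForeignKey(settings.AUTH_USER_MODEL, on_delete=models.PROTECT)
    project = models.ForeignKey(CharityProject, on_delete=models.PROTECT)
    amount = models.DecimalField(max_digits=12, decimal_places=2)
    created_at = models.DateTimeField(auto_now_add=True)
    payment_reference = models.CharField(max_length=255, blank=True)  # gateway transaction id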

More details are provided in the accompanying requirements document; however, to reiterate, we do not intend to construct the entire system described in that document under the project being staffed through this RFP.

PLEASE NOTE: the results of this development MUST be completed and deployed by April 15, 2018

Proposal Requirements

In your proposal, please provide:

  • Previous work that you have done that is relevant (please include URLs of live systems and not of your profile on other marketplaces);
  • How you would approach this development exercise; 
  • Proposed milestones;
  • Estimated hours and budget.

You can look at the two attached documents and propose which features you can deliver by the April 15 deadline for the MVP.  Proposals that do not address the above requirements will not be considered.

Non-Profit
Web Programming
Software and Web Development

$150,000 - $200,000

16 Proposals Status: HIRING

Net 30

Company small

Client: P*******

Posted: Feb 02, 2018


WorkFusion Development - Create Human Tasks to existing Business Process

We are creating credit investment data to help analysts automate the required data sets for proper underwriting of investments.

We use WorkFusion (WF) to read, parse and extract data from legal agreements that are received in pdf format.

We have built Phase 1 of a WF business process that breaks out the key sections of the legal agreement (i.e. Cover Page, Recitals, Table of Contents, Defined Terms, Sections) using parsing and regular expressions.

We now need to build 5 human tasks by segmenting specific sections and then providing automated extraction where possible.  

We have 500 total legal agreements that need to be run through this process. All work should be developed directly in the WorkFusion platform/Business Process

Attached is a screenshot of the current Business Process that needs to be worked with.

In your proposal please answer the following questions:

  • Have you worked with WorkFusion in the past? Describe your experience using WorkFusion.
  • What is your comfort level with Java?
  • How many years of RPA experience do you have?
Financial Services
Robotic Process Automation
WorkFusion

$50/hr - $60/hr

Starts Feb 05, 2018

3 Proposals Status: CLOSED

Company small

Client: K*******************

Posted: Jan 31, 2018


Ethereum Smart Contract Audit For ICO

We are an innovative company based in Tokyo that focuses on cryptocurrency assets and blockchain technology.

Currently we are launching an ICO for an ERC-20 token. The token comes with a fixed amount available to investors, and the corresponding smart contracts have basic ERC-20 functionality. In addition, features related to withdrawal and liquidation of the token itself have been added. The token is part of a financial product and has a guaranteed value due to its underlying asset.

The contracts are currently deployed on the Rinkeby testnet and can be used there. We need a smart contract audit to test and verify all of their functionality and make the contracts safe from exploits. Additionally, we require a gas analysis for each function call that can be made on the Ethereum blockchain.

The project timeline foresees about 3 weeks for the completion of the audit including iterative feedback integration.

Only companies able to provide security expertise for smart contracts should apply for this project, since we need official proof of the audit that can be shown to investors.

Deliverables:

  1. Code review of the Solidity files. This requires working closely with the developer to understand all the functionality of the contracts.
  2. Comprehensive analysis of the long term goals of the contract. Advice on rollover to new contracts after the funding period has run out.
  3. Gas analysis: minimum gas requirements for each call and (where possible) a maximum gas limit listed for each function (an illustrative sketch follows this list).
  4. Security: an iterative procedure, working with the developers, to stop any third party from acquiring tokens without our consent. This needs to cover all previously successful hacking attempts against ERC-20 tokens.
  5. Finally, we would need a certificate for the security audit, including proofs.
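As an illustration of the gas-analysis deliverable only, the snippet below shows how per-function gas estimates could be gathered from the Rinkeby deployment with web3.py; the RPC endpoint, contract address, ABI file, and accounts are placeholders, and the method name is estimate_gas in web3.py v6 (estimateGas in older releases).

import json
from web3 import Web3

# Placeholder endpoint, address, ABI file, and accounts; substitute the
# actual Rinkeby deployment's values before running.
w3 = Web3(Web3.HTTPProvider("https://rinkeby.example-rpc.invalid"))
with open("token_abi.json") as f:
    token = w3.eth.contract(address="0x0000000000000000000000000000000000000000",
                            abi=json.load(f))

sender = "0x0000000000000000000000000000000000000001"
recipient = "0x0000000000000000000000000000000000000002"

# Estimate gas for representative ERC-20 calls (web3.py v6 naming).
estimates = {
    "transfer": token.functions.transfer(recipient, 10**18).estimate_gas({"from": sender}),
    "approve": token.functions.approve(recipient, 10**18).estimate_gas({"from": sender}),
}
for name, gas in estimates.items():
    print(f"{name}: ~{gas} gas")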
Financial Services
Economic Modeling
Finance

$15,000 - $20,000

Starts Feb 19, 2018

6 Proposals Status: HIRING

Company small

Client: O***********

Posted: Jan 29, 2018


Social Signal Detection Using NLP & Text Analytics

A SUMMARY OF OUR BUSINESS:

Big Spaceship is a creative agency focused on leveraging cultural intelligence to solve key business problems for our partners. We recently won OMMA’s Agency of the Year and work with industry leaders - including JetBlue, Starbucks, Google, and Hasbro. We have 115 employees, with one centralized office in Brooklyn.

 

THE PROBLEM WE’RE TRYING TO SOLVE:

Big Spaceship is working with various brands to better detect trends before they enter mainstream internet culture. In order to get at the forefront of these trends, we have set up different “tribes” or communities to monitor. These tribes are made up of a defined set of Twitter users who we’ve manually categorized based on target audiences relevant to our clients (e.g. “Millennial Parents”). We are using Crimson Hexagon - a social listening platform with direct access to the Twitter API - to track and monitor conversations generated from these tribes in real-time.

 

To reach “trends” we must first identify significant terms (words or phrases). We’re defining significant as anomalous based on historical data from within the tribe’s user set and anomalous in comparison to the general population. Therefore each tribe’s data must be compared to itself and a general population tribe to determine what is significant to that tribe alone.

 

Our challenge is that we have no automated way to detect trends within these tribes in real-time. We believe there are two potential approaches, but welcome other solutions:

 

Potential Approach 1: Word/Phrase Indexing

  • Analyze term usage at the user level (i.e. the proportion of users posting tweets containing a given word out of the total set of users, e.g. 35% of users used the word “candle”)
  • Slice data into set intervals (e.g. daily, every 3 days, weekly, etc.)
  • Establish a rolling baseline of term usage based on previous data (e.g. 30 days, 90 days, etc.)
  • Index term usage against this rolling baseline, accounting for variance within the baseline range
  • Index term usage against the general population to subtract general trends in term usage
  • Identify anomalous words/phrases and surface them to the user (an illustrative sketch follows below)
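A minimal pandas sketch of Approach 1, assuming one input row per (date, user, term) mention for a single tribe; the rolling window, minimum history, and z-score threshold are illustrative, and a second pass against the general-population tribe would subtract global trends.

import pandas as pd

def flag_trending_terms(rows: pd.DataFrame,
                        baseline_days: int = 30,
                        z_threshold: float = 3.0) -> pd.DataFrame:
    """rows: columns ['date', 'user_id', 'term'], one record per mention within a single tribe."""
    daily_users = rows.groupby("date")["user_id"].nunique()
    term_users = rows.groupby(["date", "term"])["user_id"].nunique()
    # Share of the tribe's active users mentioning each term on each day.
    usage = term_users.div(daily_users, level="date").rename("share").reset_index()

    usage = usage.sort_values("date")
    grouped = usage.groupby("term")["share"]
    baseline_mean = grouped.transform(lambda s: s.rolling(baseline_days, min_periods=7).mean().shift(1))
    baseline_std = grouped.transform(lambda s: s.rolling(baseline_days, min_periods=7).std().shift(1))

    # Standardise current usage against the tribe's own rolling baseline; repeating
    # this against the general-population tribe would subtract global trends.
    usage["z"] = (usage["share"] - baseline_mean) / baseline_std
    return usage[usage["z"] > z_threshold]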

Potential Approach 2: Streaming Topic Model

    • Based on Liang, Yilmaz, and Kanoulas’ paper “Dynamic Clustering of Streaming Short Documents”
    • Implement a Dynamic Clustering Topic Model (DCT), their proposed variation of Latent Dirichlet Allocation with one topic per tweet and a time-based dynamic topic model, on our data for each tribe
    • Slice data into set intervals (e.g. daily, every 3 days, weekly, etc.)
    • Establish topic distributions within each tribe at each time interval
    • Establish a rolling baseline of topic distributions based on previous data (e.g. 30 days, 90 days, etc.)
    • Index topic distributions against this rolling baseline, accounting for variance within the baseline
    • Index topic distributions against the general population to subtract general trends
    • Identify anomalous topics and surface them to the user; users can easily label topics based on the terms within them (a simplified sketch follows below)
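A simplified stand-in for Approach 2: standard gensim LDA fitted per time slice, with per-interval topic prevalence that could then be indexed against a rolling baseline exactly as in Approach 1. The paper's DCT model additionally links topics across intervals, which this sketch does not attempt; vocabulary-filter settings and topic counts are illustrative.

from gensim import corpora
from gensim.models import LdaModel

def topics_for_interval(tokenised_tweets, num_topics=20):
    """tokenised_tweets: one token list per tweet in the interval."""
    dictionary = corpora.Dictionary(tokenised_tweets)
    dictionary.filter_extremes(no_below=5, no_above=0.5)   # drop rare/ubiquitous terms
    corpus = [dictionary.doc2bow(tokens) for tokens in tokenised_tweets]
    lda = LdaModel(corpus, num_topics=num_topics, id2word=dictionary,
                   passes=5, random_state=0)
    # Topic prevalence in this interval = average topic weight over tweets.
    prevalence = [0.0] * num_topics
    for bow in corpus:
        for topic_id, weight in lda.get_document_topics(bow, minimum_probability=0.0):
            prevalence[topic_id] += weight / len(corpus)
    return lda, prevalence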

 

From this analysis we would likely need daily exports of these top terms or topics in the form of CSVs relevant to each tribe.

 

THE KIND OF EXPERTISE REQUIRED:

Natural Language Processing

  • Topic Modeling
  • Tokenization
  • etc.

Anomaly detection

Unsupervised Learning

Neural Networks (optional)

Data Storage/Management

 

DATA SOURCES & FORMATS:

We expect to have 10-15 tribes with 500-2000 tweets per day. Each tribe will have a monitor in Crimson Hexagon; tweet links are pulled from Crimson and then tweet content is pulled from the Twitter API. We will collect and store this data daily for analysis.

 

CURRENT TECH STACK:

Python 3 (Required)

  • Pandas
  • NumPy
  • SciPy
  • scikit-learn
  • Gensim
  • TensorFlow
  • Peewee

PostgreSQL - Google Cloud SQL (Flexible)

Spark (if necessary)

 

BID:

For our bidding process, we would like experts to submit an outline of their approach, a rationale explaining why that approach is the right solution, existing references they’ll use to support their approach (e.g. published white papers outlining an approach for a similar problem), and an estimate of hours. Our hourly rate will range between $100 - $200 for this project.

 

DELIVERABLE:

A replicable approach to detecting the emergence of trends within ongoing conversations, with thorough documentation describing the general methodology used.

 

LOCATION PREFERENCE:

We would like a collaborative working model in which the candidate works either onsite in Brooklyn or within the Eastern Time Zone, iterating alongside our in-house data scientists and analysts.

 

SAMPLE DATASET:

Crimson Hexagon /posts endpoint with Twitter Output (JSON):

{

   "posts": [

       {

           "url": "http://twitter.com/mirl/status/882700164401692672",

           "title": "",

           "type": "Twitter",

           "location": "VA, USA",

           "geolocation": {

               "id": "USA.VA",

               "name": "Virginia",

               "country": "USA",

               "state": "VA"

           },

           "language": "en",

           "assignedCategoryId": 4763388608,

           "assignedEmotionId": 4763388602,

           "categoryScores": [

               {

                   "categoryId": 4763388606,

                   "categoryName": "Basic Negative",

                   "score": 0

               },

               {

                   "categoryId": 4763388610,

                   "categoryName": "Basic Positive",

                   "score": 0

               },

               {

                   "categoryId": 4763388608,

                   "categoryName": "Basic Neutral",

                   "score": 1

               }

           ],

           "emotionScores": [

               {

                   "emotionId": 4763388602,

                   "emotionName": "Neutral",

                   "score": 0.86

               },

               {

                   "emotionId": 4763388603,

                   "emotionName": "Sadness",

                   "score": 0.01

               },

               {

                   "emotionId": 4763388607,

                   "emotionName": "Surprise",

                   "score": 0

               },

               {

                   "emotionId": 4763388604,

                   "emotionName": "Fear",

                   "score": 0

               },

               {

                   "emotionId": 4763388605,

                   "emotionName": "Disgust",

                   "score": 0

               },

               {

                   "emotionId": 4763388611,

                   "emotionName": "Anger",

                   "score": 0

               },

               {

                   "emotionId": 4763388609,

                   "emotionName": "Joy",

                   "score": 0.12

               }

           ],

           "imageInfo": [

               {

                   "url": "http://pbs.twimg.com/media/DD_7HvpWsAEHq4E.jpg"

               }

           ]

       }

   ],

 "totalPostsAvailable": 1,

 "status": "success"

}

Example of the Twitter API /statuses/lookup endpoint (JSON):

[

 {

   "created_at": "Tue Mar 21 20:50:14 +0000 2006",

   "id": 20,

   "id_str": "20",

   "text": "just setting up my twttr",

   "source": "web",

   "truncated": false,

   "in_reply_to_status_id": null,

   "in_reply_to_status_id_str": null,

   "in_reply_to_user_id": null,

   "in_reply_to_user_id_str": null,

   "in_reply_to_screen_name": null,

   "user": {

     "id": 12,

     "id_str": "12",

     "name": "Jack Dorsey",

     "screen_name": "jack",

     "location": "California",

     "description": "",

     "url": null,

     "entities": {

       "description": {

         "urls": []

       }

     },

     "protected": false,

     "followers_count": 2577282,

     "friends_count": 1085,

     "listed_count": 23163,

     "created_at": "Tue Mar 21 20:50:14 +0000 2006",

     "favourites_count": 2449,

     "utc_offset": -25200,

     "time_zone": "Pacific Time (US & Canada)",

     "geo_enabled": true,

     "verified": true,

     "statuses_count": 14447,

     "lang": "en",

     "contributors_enabled": false,

     "is_translator": false,

     "is_translation_enabled": false,

     "profile_background_color": "EBEBEB",

     "profile_background_image_url": "http://abs.twimg.com/images/themes/theme7/bg.gif",

     "profile_background_image_url_https": "https://abs.twimg.com/images/themes/theme7/bg.gif",

     "profile_background_tile": false,

     "profile_image_url": "http://pbs.twimg.com/profile_images/448483168580947968/pL4ejHy4_normal.jpeg",

     "profile_image_url_https": "https://pbs.twimg.com/profile_images/448483168580947968/pL4ejHy4_normal.jpeg",

     "profile_banner_url": "https://pbs.twimg.com/profile_banners/12/1347981542",

     "profile_link_color": "990000",

     "profile_sidebar_border_color": "DFDFDF",

     "profile_sidebar_fill_color": "F3F3F3",

     "profile_text_color": "333333",

     "profile_use_background_image": true,

     "default_profile": false,

     "default_profile_image": false,

     "following": true,

     "follow_request_sent": false,

     "notifications": false

   },

   "geo": null,

   "coordinates": null,

   "place": null,

   "contributors": null,

   "retweet_count": 23936,

   "favorite_count": 21879,

   "entities": {

     "hashtags": [],

     "symbols": [],

     "urls": [],

     "user_mentions": []

   },

   "favorited": false,

   "retweeted": false,

   "lang": "en"

 },

 {

   "created_at": "Sun Feb 09 23:25:34 +0000 2014",

   "id": 432656548536401920,

   "id_str": "432656548536401920",

   "text": "POST statuses/update. Great way to start. https://t.co/9S8YO69xzf (disclaimer, this was not posted via the API).",

   "source": "web",

   "truncated": false,

   "in_reply_to_status_id": null,

   "in_reply_to_status_id_str": null,

   "in_reply_to_user_id": null,

   "in_reply_to_user_id_str": null,

   "in_reply_to_screen_name": null,

   "user": {

     "id": 2244994945,

     "id_str": "2244994945",

     "name": "TwitterDev",

     "screen_name": "TwitterDev",

     "location": "Internet",

     "description": "Developers and Platform Relations @Twitter. We are developers advocates. We can't answer all your questions, but we listen to all of them!",

     "url": "https://t.co/66w26cua1O",

     "entities": {

       "url": {

         "urls": [

           {

             "url": "https://t.co/66w26cua1O",

             "expanded_url": "/",

             "display_url": "dev.twitter.com",

             "indices": [

               0,

               23

             ]

           }

         ]

       },

       "description": {

         "urls": []

       }

     },

     "protected": false,

     "followers_count": 3147,

     "friends_count": 909,

     "listed_count": 53,

     "created_at": "Sat Dec 14 04:35:55 +0000 2013",

     "favourites_count": 61,

     "utc_offset": -25200,

     "time_zone": "Pacific Time (US & Canada)",

     "geo_enabled": false,

     "verified": true,

     "statuses_count": 217,

     "lang": "en",

     "contributors_enabled": false,

     "is_translator": false,

     "is_translation_enabled": false,

     "profile_background_color": "FFFFFF",

     "profile_background_image_url": "http://abs.twimg.com/images/themes/theme1/bg.png",

     "profile_background_image_url_https": "https://abs.twimg.com/images/themes/theme1/bg.png",

     "profile_background_tile": false,

     "profile_image_url": "http://pbs.twimg.com/profile_images/431949550836662272/A6Ck-0Gx_normal.png",

     "profile_image_url_https": "https://pbs.twimg.com/profile_images/431949550836662272/A6Ck-0Gx_normal.png",

     "profile_banner_url": "https://pbs.twimg.com/profile_banners/2244994945/1391977747",

     "profile_link_color": "0084B4",

     "profile_sidebar_border_color": "FFFFFF",

     "profile_sidebar_fill_color": "DDEEF6",

     "profile_text_color": "333333",

     "profile_use_background_image": false,

     "default_profile": false,

     "default_profile_image": false,

     "following": true,

     "follow_request_sent": false,

     "notifications": false

   },

   "geo": null,

   "coordinates": null,

   "place": null,

   "contributors": null,

   "retweet_count": 1,

   "favorite_count": 5,

   "entities": {

     "hashtags": [],

     "symbols": [],

     "urls": [

       {

         "url": "https://t.co/9S8YO69xzf",

         "expanded_url": "/docs/api/1.1/post/statuses/update",

         "display_url": "dev.twitter.com/docs/api/1.1/p…",

         "indices": [

           42,

           65

         ]

       }

     ],

     "user_mentions": []

   },

   "favorited": false,

   "retweeted": false,

   "possibly_sensitive": false,

   "lang": "en"

 }

]

Media and Advertising
Product Development
Social Media Research

$100/hr - $200/hr

Starts Feb 05, 2018

11 Proposals Status: HIRING

Company small

Client: B*************

Posted: Jan 23, 2018


Latent Class Segmentation on Survey Data

Perform latent class segmentation on survey data to identify meaningful, actionable, and targetable segments based on needs, attitudes, and usage of telco services and technology in general. 

Specific tasks:

1. Perform latent class segmentation on survey data. 

2. Build a predictive algorithm to predict the segment membership of the internal customer database. Essentially, build the segmentation on survey data and then assign segment membership to the internal database (a brief sketch follows below).
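A rough sketch of both tasks, using scikit-learn's GaussianMixture as a stand-in for latent class analysis (a dedicated LCA package would normally be preferred for purely categorical survey items) and a random forest as the typing tool; file names, column prefixes, and the number of classes are illustrative assumptions.

import pandas as pd
from sklearn.mixture import GaussianMixture
from sklearn.ensemble import RandomForestClassifier

survey = pd.read_csv("survey_responses.csv")      # needs/attitudes/usage items (placeholder file)
internal = pd.read_csv("customer_database.csv")   # transaction-derived features (placeholder file)

# Task 1: derive latent segments from the survey items (class count is illustrative).
survey_items = [c for c in survey.columns if c.startswith("q_")]
mixture = GaussianMixture(n_components=5, random_state=0).fit(survey[survey_items])
survey["segment"] = mixture.predict(survey[survey_items])

# Task 2: train a typing tool on variables available in BOTH data sets,
# then assign segment membership across the internal database.
shared = [c for c in survey.columns if c in internal.columns and c != "segment"]
typing_tool = RandomForestClassifier(random_state=0).fit(survey[shared], survey["segment"])
internal["segment"] = typing_tool.predict(internal[shared])
internal["segment_probability"] = typing_tool.predict_proba(internal[shared]).max(axis=1)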

Kind of resource:

1. Expert in latent class segmentation on survey data and customer transaction data

2. Must have executed multiple projects in latent class segmentation and in building a typing or predictive tool to assign segment membership

Analytics

$70/hr - $150/hr

Starts Feb 05, 2018

14 Proposals Status: IN PROGRESS

Company small

Client: G***********************************

Posted: Jan 17, 2018


Data Quality Anomaly Detection and Suggestion Engine

Summary

We are a large IT infrastructure organization looking to improve the quality of our operational infrastructure monitoring data. Our goal with this project is to develop an API that will detect anomalies in single tables of structured data, using a combination of unsupervised machine learning methods and defined rules, and suggest new values for anomalous data. The API will support a larger project which includes the visualization of these anomalies in a dashboard; however, this Experfy project will focus on the API and ML modelling.

Scope of Work

The selected expert will be responsible for:

  • Defining an API to provide anomaly detection and suggestion services
  • Developing an unsupervised learning model to detect anomalous data related to each of 24 Key Business Elements (KBEs) in our data
  • Coding logic to detect additional anomalies according to predefined rules for each KBE
  • Implementing the defined API with the completed model and ML implementation
  • Demonstrating the robustness of the model using various test data sets, including data with both similar (tech/infrastructure) and dissimilar (Fisher’s iris flowers, etc.) contexts
  • Supporting QA and visualizer dashboard development efforts as bugs or issues are discovered in the API or model

The primary output of this project is an API for detecting anomalies in our infrastructure operations data.
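As an illustrative starting point for the detection engine (not the API layer), the sketch below combines an IsolationForest pass with explicit per-column rules and proposes a naive replacement value (the column median) for flagged numeric cells; the contamination rate and the example rule are assumptions.

import pandas as pd
from sklearn.ensemble import IsolationForest

def detect_anomalies(table: pd.DataFrame, rules: dict) -> pd.DataFrame:
    """Flag anomalous rows and suggest replacement values for numeric columns."""
    numeric = table.select_dtypes("number")
    filled = numeric.fillna(numeric.median())
    model = IsolationForest(contamination=0.01, random_state=0).fit(filled)

    table = table.copy()
    table["ml_anomaly"] = model.predict(filled) == -1          # unsupervised layer

    # Rule layer: each rule maps a column name to a predicate returning True when valid.
    table["rule_anomaly"] = False
    for column, is_valid in rules.items():
        table["rule_anomaly"] |= ~table[column].map(is_valid)

    # Naive suggestion: offer the column median for any flagged numeric value.
    flagged = table["ml_anomaly"] | table["rule_anomaly"]
    for column in numeric.columns:
        table[f"suggested_{column}"] = table[column].where(~flagged, numeric[column].median())
    return table

# Hypothetical rule for one KBE-style column:
# result = detect_anomalies(df, {"cpu_utilisation_pct": lambda v: 0 <= v <= 100})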

The attached presentation provides additional details about the data, environment, and project requirements and gives additional context to the broader project scope (including the visualization dashboard project). Since this project focuses on the data anomaly detection engine only, project details out of scope for this Experfy posting have been greyed out for clarity; however, they are still very relevant to your implementation.

Challenge Format

We plan to hire more than one expert to implement their model using a common initial data set. The different approaches will be evaluated after initial implementation, and only one expert will be asked to continue with the project, refining their model and building the API. The period for determining which approach will be used (and who will complete the final project deliverable) will be variable but is expected to last 1-2 weeks.

Proposal

As part of your proposal please answer the following questions:

  1. Please describe the approach you intend to use to solve this problem (covering both anomaly detection and value suggestion).
  2. What trade-offs are you making when choosing one approach over another?
  3. Which technology stack would you use for this challenge?
  4. What are the underlying assumptions you are making about the data set for this proposal?
  5. How would you approach tuning the parameters for the chosen approach?
  6. How do you plan to evaluate the performance of the model?
  7. How do you plan to develop the API?

Hi-Tech
Machine Learning
Analytics

$25,000 - $35,000

Starts Jan 05, 2018

14 Proposals Status: IN PROGRESS

Net 60

Company small

Client: C*******

Posted: Dec 29, 2017
