
381 Projects that match your criteria


Mobile Analytics Dashboarding for an App Developer

Spark is a small mobile app development company that sells a group app combining the capabilities of Slack, WhatsApp, and Outlook in a single tool. Spark currently markets its app to two verticals: youth sports teams and Greek Life chapters. Spark will add approximately 10 niche verticals per month and requires an analytics monitoring system that is extensible to over 100 verticals within the next 90 days.

The full scope of our marketing effort includes the following:

  • Marketing Channels: E-mail, Direct Mail, Telemarketing, Social Media, Referral Programs 
  • Existing Tools: Salesforce.com, Pardot, App Annie, Google Analytics, SimilarWeb.com 
  • Current reporting capabilities: .csv file aggregation as time permits, MS Excel roll-ups 
  • Current App Count: 5 
  • App installations: ~60k 
  • Platforms: iOS and Android 
  • 14-Day user count: ~10k 
  • Affinity Group Count: ~5k

Project Goal

Develop a web-based analytics dashboard utilizing near real-time data to inform strategic management decisions pertaining to:

  • Resource Allocations
  • Product development priorities
  • Marketing plans
  • Operational Priorities
  • Design and Communication decisions

Dashboard components will include specifics for the following business functions:

  1. Outbound Marketing – First Contact to Admin Enrollment
  2. Website Usage – User engagement stats with vertical segmentation differentials
  3. Onboarding Process – Stage Metrics (Admin Enrollment request through Group Adoption and usage)
  4. 2nd Strike - Follow up Marketing and Support Activities (Sales/Support Engagements, Push Messaging)
  5. Customer Satisfaction – App utilization, Overall and Feature-by-feature
  6. Referral – Recommend referees, Monitor referrals
  7. Cross Marketing – Determine and target cross demographic vertical opportunities.

We are seeking someone to provide a roadmap and hands-on implementation leading to the dashboard above. In your proposal, please describe your specific experience with such projects and how you think you can help us.

Hi-Tech
Customer Acquisition Modeling
Sentiment Analysis

$80/hr - $100/hr

Starts Jun 21, 2016

12 Proposals Status: COMPLETED

Client: F***** ***** ****

Posted: Feb 19, 2016

Entity Extraction and Content Curation from Profiles of Experts in Biotechnology

We are a startup in the process of building a database of biotechnology experts.  We have identified the following five sources with existing expert profiles:

  1. www.clinicaltrials.gov
  2. ResearchGate
  3. NIH: https://projectreporter.nih.gov/reporter_summary.cfm
  4. Stanford Med.  https://med.stanford.edu/profiles/browse?affiliations=capFaculty
  5. Harvard Catalyst.  https://connects.catalyst.harvard.edu/Profiles/search/default.aspx?showcolumns=1&searchtype=people&otherfilters=

We would like to extract the following information to import into our database:

  1. Name
  2. Title 
  3. Institution
  4. Specialty/Focus/Interest
  5. Bio/Training
  6. Publications
  7. Photo, if available

Please review the sources of content and provide your approach and a ballpark estimate in hours to import 10,000 user profiles into our database. Either SQL Server or MySQL is acceptable.
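As a rough illustration of the extraction step, the sketch below pulls name/title/institution fields out of one profile page. The HTML structure, class names, and sample values are invented for illustration; each of the five real sources would need its own parser, and most would require more robust tooling than regexes.

```python
# Hypothetical sketch of per-profile field extraction. The markup below
# is NOT from any of the listed sites -- it is an invented stand-in.
import re

SAMPLE_PROFILE = """
<div class="profile">
  <h1 class="name">Jane Doe, PhD</h1>
  <p class="title">Associate Professor</p>
  <p class="institution">Example University</p>
</div>
"""

def extract_fields(html):
    """Pull a few of the target fields with simple patterns."""
    fields = {}
    for key in ("name", "title", "institution"):
        m = re.search(r'class="%s">([^<]+)<' % key, html)
        fields[key] = m.group(1).strip() if m else None
    return fields

record = extract_fields(SAMPLE_PROFILE)
print(record["name"])  # Jane Doe, PhD
```

The same record shape (plus specialty, bio, publications, photo URL) would then be bulk-inserted into SQL Server or MySQL.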

Pharmaceutical and Life Sciences
Information Extraction Web Scraping System
Software and Web Development

$75/hr - $150/hr

Starts Feb 15, 2016

18 Proposals Status: CLOSED

Client: E***************

Posted: Feb 10, 2016

Net Profitability Modeling and Dashboarding Using Tableau

Through our seven wholesale branches and three luxury showrooms, we are in the business of wholesaling top-quality plumbing and other products to our customers with a level of service recognized throughout the industry as exceptional.

We seek a BI expert from Experfy to produce a model for us to build some net profit analysis reports in Tableau.

Attached you will find the header lines for the data that we submit.  The “Sales Data” tab contains the data that is pulled directly from our SQL database; we pull this information and submit it monthly.  The Expenses tab shows the header lines from the reports we submit showing our expenses.  This information is pulled and calculated annually, except for the ‘customer rebate’ and ‘outside sales wages’, which are calculated monthly.

Details by Tab:

Sales Data:

  • Invoice Header: Details the per invoice sales data
  • Invoice Lines: Details the per line sales data
  • Direct Shipment: Identifies Direct ship invoices – allocation of expenses differs for these invoices
  • Customer: Details customer information
  • Users: Details writer and sales rep information
  • Product: Details product information
  • Category Map: Provides a hierarchy of our product distribution (Market, Group, Price Line)
  • Vendor: Details Supplier data

This data needs to be linked to provide various multidimensional reports based on Customer, Product, Branch, Sales Rep, and/or Writer.  This is a very simple model to build, and we are already utilizing it in a number of ways.

Expenses: calculated by Branch (price or shipping)

GL Code: Summary of GL expenses into simple work functions that will be assigned to the associated sales function

Credit Card Fee: Summary of credit card fees earned by customer

Customer Discount: Summary of payment terms discounts taken by customer

Purchases Discount and Rebate: Two separate reports summarizing the purchase discounts taken and rebates earned by price line.

Outside Sales Wages:  Total annual earnings summarized by outside sales rep

Customer Rebate:  Summary of rebates paid out by customer.

The expenses file will need to be developed for each year, with the allocation calculation levels being ‘brought forward’ into the current period if there is no data available.  For example, fiscal-year 2015 expense calculations will be used to calculate and allocate expenses in 2016 until we add data to the expense model at the end of Q1 2016.  2015 sales would utilize 2015 expense calculations, allowing for year-over-year comparisons.

The expenses will be allocated to the sales data following these rules.

CoGS-Other: Calculated and assigned as a % of COGS per price branch

Fleet Expenses:  This represents our internal trucking expenses.  Calculating and assigning these expenses is the most involved.  We need to determine a per-shipment charge for each shipping branch.  A shipment is defined as all invoices delivered to a customer via a single ship-via on a single day: four invoices shipped to a single customer from one branch on one day via the same ship-via would count as one shipment.  Each branch's Fleet charge is then calculated by dividing the branch's total expense value by the total number of shipments as defined above.  Assuming the Fleet charge is forty dollars per shipment, those forty dollars would need to be divided among the four invoices, prorated by each invoice's share of the shipment's total CoGS.
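The fleet allocation described above can be sketched as follows. The shipment key and the $40 per-shipment charge come from the text; the invoice figures are invented for illustration.

```python
# Sketch of the fleet-expense allocation rule: group invoices into
# shipments keyed by (customer, branch, ship-via, date), then prorate
# the per-shipment charge across the shipment's invoices by CoGS.
from collections import defaultdict

def allocate_fleet(invoices, fleet_charge=40.0):
    """Return {invoice_id: allocated fleet expense}."""
    shipments = defaultdict(list)
    for inv in invoices:
        key = (inv["customer"], inv["branch"], inv["ship_via"], inv["date"])
        shipments[key].append(inv)
    allocation = {}
    for group in shipments.values():
        total_cogs = sum(inv["cogs"] for inv in group)
        for inv in group:
            allocation[inv["id"]] = fleet_charge * inv["cogs"] / total_cogs
    return allocation

# Two invoices to the same customer/branch/ship-via/day = one shipment.
invoices = [
    {"id": 1, "customer": "A", "branch": "B1", "ship_via": "Truck",
     "date": "2016-01-05", "cogs": 300.0},
    {"id": 2, "customer": "A", "branch": "B1", "ship_via": "Truck",
     "date": "2016-01-05", "cogs": 100.0},
]
print(allocate_fleet(invoices))  # {1: 30.0, 2: 10.0}
```

The same prorate-by-CoGS pattern applies to the other invoice-level allocations described later in this posting.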

Shipping and Delivery:  This line is similar to Fleet expenses above, only it represents 3rd party freight providers

G&A Assigned: this is a total by branch of the fixed expenses.  These expenses will need to be allocated by invoice by price branch

Order Entry Expense:  Total sales order entry expenses per branch to be calculated and assigned per line keyed.

Selling Expense: By price branch assigned to sales as a % of COGS

Warehouse and Inventory Expense:  Total warehouse expenses per shipping branch and assigned per line picked.

Credit Card Fees, Customer Discounts and Customer Rebates will be assigned to the specific customer’s accounts as a percent of their sales

Purchase Discount and Rebate will be applied as a percent of the COGS at the price line level

Outside Sales Wages – calculated as a % of GP$ earned and assigned to the invoice headers on which the rep is named as the Outside Salesperson.

Once these two areas are merged, I will need to be able to create reports showing the total sales, CoGS, rebate, and expense for each line of sales data.  Items assigned at the invoice level would need to be allocated to the individual lines on the invoice on a prorated basis, using CoGS as the basis.  This will allow us to calculate, at an extremely granular level, the net profit earned on every single product for every individual customer.  We should also be able to build summary reports at the Writer, sales rep, and branch level, as well as at the product, price line, and buyline level.  The inclusion of date data at the expense level would permit year-over-year delta reports.

Some key reports we are utilizing now include:

  • Multi-Dimensional P&L report at the Rep, Customer, Branch, or Product level.  The summary tab/X axis would be defined by the parameter chosen (Ship date, branch etc) (see Base Report Tab for example).
  • Ranking reports by customer, rep, product and price line.
  • Year over year delta reports (Customer, rep, product, branch etc)
  • Summary Reports of Top/Bottom 5 accounts filtered by any of the above parameters.

I have included a sample of one of the Multi-dimensional P&L reports that we produce currently on the third tab of the excel sheet. 

I am thinking that we would use these data sets to generate a monthly ‘flat file’ that would provide the base for running our traditional reports.

The deliverable for this project would be Tableau dashboards that replace all third-party tools we are using. We are already using Tableau as one of our primary reporting tools. It is connected directly to our ERP system.

Please provide us with an estimate of what this would take in terms of effort and cost. Please also provide your approach and past experience.  Given this is a relatively large project, we would prefer that it be performed in phases, starting with a proof of concept.

Consumer Goods and Retail
Dashboards & Scorecards
Data Visualization

$75/hr - $150/hr

Starts Mar 11, 2016

23 Proposals Status: COMPLETED

Client: M***** ****** ****

Posted: Feb 10, 2016

Resume Scoring Algorithm

We source and hire thousands of independent contractors for our clients. We have a rigorous test-based application process that all candidates go through. Given the data we have collected after vetting over one hundred thousand applicants, we'd like to start finding correlations between resume/LinkedIn profile attributes and test performance. The goal is to become expert at identifying the candidates most likely to do well on our tests.

In order to conduct a proof of concept, we'd like to start by providing you with data for three of our Pipelines: Java Software Engineer, Java Chief Software Architect, and .Net Chief Software Architect. We have the test data and resumes for these candidates (14,000+) in AWS. We may also have some team members manually add their LinkedIn profiles to increase the amount of data you have to analyze. As far as the actual format of the data, we'd like to discuss the best way for you to get it. You can talk to our engineering team to finalize those details. Regarding output, we would want you to identify the top indicators of a successful Java Software Engineer candidate. We'd like for you to consider a multitude of factors (skills, job history, location, work experience, among others). We can enumerate that list of factors along with you when we discuss this in more detail.
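As a toy illustration of the correlation analysis this posting asks for, the sketch below computes a Pearson correlation between one invented resume-derived feature and invented test scores; real work would cover many features across the 14,000+ candidates.

```python
# Toy sketch: correlation between a single resume feature (years of
# Java experience) and test score. All numbers are invented.
from statistics import mean

def pearson(xs, ys):
    """Pearson correlation coefficient of two equal-length sequences."""
    mx, my = mean(xs), mean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs) ** 0.5
    vy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (vx * vy)

years = [1, 3, 5, 7, 9]          # hypothetical resume attribute
scores = [40, 55, 60, 72, 88]    # hypothetical test performance
print(round(pearson(years, scores), 3))  # close to 1.0 for this toy data
```

Ranking features by the strength of such correlations (with appropriate controls) is one simple way to surface "top indicators" before moving to a full predictive model.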

If we prove the concept of this theory we'd definitely pursue the same analysis for many more pipelines and ultimately try to implement a closed-loop system that allows us to learn from the performance of all of our candidates. 

Here is a small sample of what the data we give you would look like: https://docs.google.com/spreadsheets/d/18Xvt4ib9mTrxgrcNWpJQboQq8HMi6Snn4IMGhJy6xlk/edit#gid=1291488262

Hi-Tech
Talent Acquisition Modeling
Human Resources

$150/hr

Starts Feb 12, 2016

19 Proposals Status: COMPLETED

Client: C*********

Posted: Feb 04, 2016

Provide clear concise narrative of 250+ graphs/charts

Overview: We need clear narrative text descriptions of 150-300 words per page for around 250+ graphs/charts, in a clear, concise format, with any insights, value-adds, or commentary on each chart so the user can read the text and use the chart as a visual cue. You could also call it 'report writing', 'secondary data research', or any such cross-function.

Note: We'll provide ALL the data, graphs, charts, notes, and background, and will clarify where we can.  No primary research, data collection, or creation of data in BI/Excel etc. is needed. Just a clear write-up of what you see on each chart.

Sector: Media & content markets, divided into sections (TV, online, mobile, video, gaming, etc.).

See project files: two examples, which we call Exhibits. One has a pie chart with example text and notes around it showing how it should look. The other is data without any text. Essentially, take 250-300 graphs/exhibits, all from Excel – some simple, some complex, on a specific area or at a macro level – and explain each better with any insight or context. The data will be given as PPT slide decks.

For example: growth of IPTV from 2014 to 2016, country-wise, with relevant data in standard pie charts, bar graphs, etc. Distil that into clear text; the graph will be embedded in that report.

Deadline: 60 days from now (late March 2016).

Criteria:

  • Must have experience in media/advertising/market research to understand the landscape and trends
  • Strong editing and writing skills, combined with analytics skills
  • Able to work closely with a remote team over email, extranet, Skype, etc. as required
  • Able to focus and deliver quality work to defined timelines and milestones
  • Able to edit and revise as required, coordinating with our report designer (the report will be printed later) and translator

Media and Advertising
Market Research
Customer Analytics

$85/hr - $150/hr

Starts Jan 25, 2016

23 Proposals Status: CLOSED

Client: c******* *********

Posted: Jan 25, 2016

Development of a Resume Scoring Algorithm

Background

We are a provider of eRecruitment technology which is used by our clients to manage the workflow of recruiting new hires, including the following steps: posting vacancies, providing online application forms, integrating recruitment tests, communicating with candidates, etc.

For graduate hire processes we have developed a structured online form which captures job applicants’ biographical details such as education scores (GPA, SAT, ACT, GMAT), work experience history and also leadership and other achievements.  Our clients’ HR managers and recruiters use this information to shortlist a subset of applicants for interview.

The leadership and achievements section of the application form asks applicants to provide details of a maximum of ten important achievements or leadership experience gained through their studies, work experience or extra-curricular activities.  This information is provided by candidates selecting one of nineteen categories (e.g. national sports award, leadership position in university society), providing free text information on the name of the award or position, a separate free text box for the organisation/society name, and selecting from a prepopulated list other details such as university name and country.

We have created a target list of important achievements/leadership positions which are deemed prestigious or important by recruiters.  The free-text achievement data is pre-processed by matching via regex against this large target set of academic, sporting and leadership achievements which are considered desirable in an applicant.   Each matched achievement is given a descriptive meta-tag, for example “<sports award>”.  In addition some specific achievements will be tagged with an occurrence category (pre-university, undergraduate studies, etc.), university name, and/or country name.
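The regex-matching and meta-tagging step described above might look roughly like this. The two target patterns and their tags are invented stand-ins for the real target list of achievements.

```python
# Illustrative sketch of matching free-text achievements against a
# target list and attaching descriptive meta-tags. Patterns and tags
# below are invented examples, not the real curated list.
import re

TARGETS = [
    (re.compile(r"\b(varsity|national)\b.*\bsports?\b", re.I), "<sports award>"),
    (re.compile(r"\bpresident\b.*\bsociety\b", re.I), "<leadership position>"),
]

def tag_achievement(text):
    """Return the meta-tags of every target pattern the free text matches."""
    return [tag for pattern, tag in TARGETS if pattern.search(text)]

print(tag_achievement("President of the university chess society"))
# ['<leadership position>']
```

In the real pipeline each matched achievement would additionally carry its occurrence category, university, and country where applicable.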

 

In summary the data for each candidate usually includes the following:

Education scores:  University grade point average (GPA) for current course, pre-university national scores for the SAT or ACT tests.  

Work Experience information: Each employment or work experience usually includes employer name, job title, start and end dates, duration, self-selected category from internship/ work placement / permanent / temporary job etc.

Leadership and achievements:  free text matched against a target list of achievements.

The data can be provided in csv or xlsx format, containing a mix of numerical and text data for approx 50,000 applicants.

 

 

Goals

We need assistance in developing the following:

  • An algorithm that can be used to predict the likelihood of an applicant being offered an interview and being offered a job.
  • A scoring or weighting system that can rank candidates in order of likely interview success.
  • Weightings for particular types of achievements as flagged by the meta-tags, e.g. university sports awards, national community service awards.
  • Recommendations on longer-term big data approaches to this problem, including skills, applications and technologies.

One potential complexity is that candidates do not always provide all relevant information.  For example, education test scores such as the SAT may be missing, or the candidate may have studied at a non-US university that reports education scores in a format other than GPA.  Ideally a solution should account for missing data, for example by not penalising applicants with an international education or qualifications.

Another complexity is that the algorithm should not adversely discriminate against applicants on the basis of gender or ethnicity.

The broader aim of the project is to develop a transparent scoring system that can be used to rank and ultimately predict the employability of recent graduates.  A key feature is that the weightings or scores for individual achievements are transparent and can be displayed via our recruitment system to our clients’ recruitment managers.  Hence our approach of creating a list of targeted achievements. 
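One minimal sketch of such a transparent additive score, with the missing-data handling described above: weights for features a candidate did not supply are dropped and the remainder renormalised, so a missing SAT score does not penalise the applicant. The feature names and weights are invented, not tuned.

```python
# Sketch of a transparent, additive scoring function with
# renormalisation over the features a candidate actually supplies.
# Weights are invented for illustration.
WEIGHTS = {"gpa": 0.4, "sat": 0.3, "sports_award": 0.3}

def score(candidate):
    """Weighted mean over the candidate's non-missing features (0-1 scale)."""
    present = {k: w for k, w in WEIGHTS.items() if candidate.get(k) is not None}
    total = sum(present.values())
    return sum(candidate[k] * w for k, w in present.items()) / total

print(score({"gpa": 0.9, "sat": 0.8, "sports_award": 1.0}))  # 0.9
print(score({"gpa": 0.9, "sat": None, "sports_award": 1.0}))  # ≈ 0.943, no SAT penalty
```

Because every weight is an explicit number, the score can be displayed to recruitment managers exactly as the posting requires; fairness across gender and ethnicity would still need to be audited separately.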

However, we recognize that this may not be the best approach and would welcome advice on the types of infrastructure and applications we should be using for the longer term, for example whether natural language processing and machine learning would be more appropriate.  For such purposes we have a significantly larger dataset (approx 10 million) of applicant CVs/resumes (in a less structured format) to analyse.

 

 

Deliverables

The ranking algorithm would need to be implemented within our proprietary system. It is likely the ranking algorithm solution will be in Python or Perl.  Alternatives may be acceptable subject to discussion with our Technical Director.

Milestones/deadline

We are looking for a working algorithm that we can implement during Q2 2016.

 

 

Please see attachment for structured achievement forms. 

 

 

Professional Services
Job Applicant Scoring
Human Resources

$8,000 - $12,000

32 Proposals Status: IN PROGRESS

Client: W*** ***

Posted: Jan 22, 2016

Forecasting Sales for New Burger King Restaurant Sites

>>>TO APPLY MUST BE FLUENT IN SPANISH<<<

I am a franchisee and the Development Manager of a company whose activities include investing in and operating Burger King restaurants. We are now the franchisee with the highest growth rate in Mexico, opening 15 new restaurants every year.

The restaurants that I have opened recently are not achieving the sales that I forecasted, so the investments in new restaurants are getting a low or negative return. We have a lot of data for each restaurant, its influence area, and the traffic of vehicles and people in front of the restaurant, but we don't have a tool to manage the data and draw insights for our business development.

The data sources at our disposal are socio-demographic information, traffic counts of vehicles and people, surveys of the people who live around or pass through the area, geomarketing data about competitors and traffic generators. The data is all in Excel.

I have attached the final report that the development manager delivers to me, but it is very laborious to create each one, and the data manipulation in Excel does not work well. He has to manage many Excel files for each report. I also attach the file with the data of the vehicle and people counts.

The deliverable may be an interactive dashboard for forecasting sales of new sites based on the data and sales of the operating restaurants. Please suggest how you would tackle this problem, what tools you would use, and how you may automate this process.
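As one hedged starting point for the forecasting piece, the sketch below fits an ordinary least-squares line of sales on a single site feature (traffic count) and uses it to score a candidate site. All figures are invented, and a real model would combine the socio-demographic, survey, and geomarketing sources described above.

```python
# Sketch: simple linear regression of restaurant sales on one site
# feature. Traffic and sales figures below are invented.
def fit_line(xs, ys):
    """Ordinary least squares for y = slope * x + intercept."""
    n = len(xs)
    sx, sy = sum(xs), sum(ys)
    sxx = sum(x * x for x in xs)
    sxy = sum(x * y for x, y in zip(xs, ys))
    slope = (n * sxy - sx * sy) / (n * sxx - sx * sx)
    intercept = (sy - slope * sx) / n
    return slope, intercept

traffic = [1000, 2000, 3000, 4000]   # vehicles/day at existing sites
sales = [50, 90, 130, 170]           # monthly sales (thousands MXN)
slope, intercept = fit_line(traffic, sales)
print(slope * 2500 + intercept)      # forecast for a 2500 vehicles/day site
```

Wrapping a model like this (or a richer multivariate one) in an interactive dashboard would replace the manual multi-file Excel process the posting describes.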

Consumer Goods and Retail
Sales Forecasting
Territory Analysis

$5,000 - $10,000

Starts Mar 01, 2016

16 Proposals Status: CLOSED

Client: C*********** ***

Posted: Jan 20, 2016

Develop Data Visualization Tool To Replace Our Current Trending Briefs

Problem: We currently provide our small-business syndicated clients with research reports that show trending information on a quarterly basis.  These reports are generated through the use of “crosstabs” software and manual input to create each line graph in PowerPoint. A copy of the current report template is attached to this job description. About 40 of these reports are manually developed each quarter, and we would like to convert them from strictly hard-copy reports either to a less labor-intensive process of creating the reports using R or, preferably, to interactive reports built on R that our clients could use via Shiny Server.

 

A standard version of the trending brief will have three lines: All survey respondents, Respondents from Top 11 Banks and Respondents from an individual bank.

 

The Kind of Expertise You Require:  A data visualization expert with experience using weighted survey data, R, Macintosh OS X Server, and FileMaker Database Server with ODBC data connections.

 

Data Sources: The unweighted survey data is currently housed on a FileMaker Database server, and analytic tools access the data through ODBC or PHP connections, but we have the ability to convert into other file types if necessary (e.g. .csv, .xls, .spss).

Weights are currently housed in excel sheets separate from survey data and would need to be applied to make data representative of the population.
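Applying the external weights to the unweighted responses could be as simple as the sketch below; the responses and weights are invented, and the real join would align weights to respondents by ID rather than by position.

```python
# Sketch: weighted proportion of a survey answer, with weights kept
# separately from the responses (as in the Excel weight sheets).
# Responses and weights are invented and aligned by index.
def weighted_share(responses, weights):
    """Weighted proportion of 'yes' answers."""
    total = sum(weights)
    hit = sum(w for r, w in zip(responses, weights) if r == "yes")
    return hit / total

print(weighted_share(["yes", "no", "yes"], [2.0, 1.0, 1.0]))  # 0.75
```

Each line in the trending brief (all respondents, Top 11 Banks, individual bank) is such a weighted statistic computed quarter by quarter over the relevant respondent subset.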

 

Current Technology Stack:  Macintosh OS X 10.11 Server, Apache and PHP5 used for custom web access, FileMaker Database Server 14, ODBC, Survey Reporter Professional (Windows), SPSS (Windows).

For this project, we plan to add R and Shiny Server (if necessary) to implement the new tool.

 

Deliverable: A data visualization tool built on R that provides output similar to what is seen in our trending brief, plus training in how to use the tool and update it with new information each quarter.

Professional Services
Data Visualization
R

$100/hr - $200/hr

37 Proposals Status: CLOSED

Client: B****** ******** *********** ****

Posted: Jan 19, 2016

Machine Learning Python-Based Imaging Pipeline

PROJECT UPDATED MARCH 15th, 2016

We are a health technology startup focused on building machine-learning driven classifiers for medical imaging.  We are currently training on the world's largest diagnostic imaging training set (larger by at least 50x).  While an enormous treasure trove, this training set introduces unique challenges as well.  We hope resource(s) from Experfy might help us overcome some of these challenges.

Our official runs are conducted on Amazon EC2 instances hitting S3 storage.  Most of our experimental and exploratory research agenda is conducted on a local development environment -- an 8-core i7 + 3*TitanX + 64GB RAM hitting a QNAP NAS via NFS.  

We are seeking external help on three fronts:

  1. Software Architecture Enhancements
  2. Hardware Setup Validation
  3. Neural Network Architecture and General ML Advisory

Software Architecture Enhancements

Our training set is 14 terabytes -- approximately 1 million images at 3000x3000 resolution.  Extracting images from DICOM medical data files also takes time.  We'd like to speed up non-core parts of the training cycle and pipeline.

  • Obviously the entire set cannot be staged in the execution environment, so images are brought over from the NAS as needed, which is slow.  Python multiprocessing is not helping beyond 5 threads, despite a mostly underutilized machine.  We'd like to re-architect this to better utilize the hardware.
  • We would like to implement any enhancements which could cut down the image processing time.
  • We would like to consider more intelligent ways of processing this (perhaps as three separate threads running in waves, pre-processing soon-to-be-needed images).
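One way to realise the "waves" idea above is a bounded prefetch queue: background threads stage upcoming images from the NAS while the trainer consumes already-loaded ones (I/O releases the GIL, so plain threads can overlap NFS reads). The loader below is a trivial stand-in for the real DICOM/NFS read, and the thread count and queue depth are knobs to tune empirically.

```python
# Sketch of a bounded prefetch pipeline: worker threads stage up to
# `depth` loaded images ahead of the consumer. `load` stands in for
# the real NAS/DICOM read.
import queue
import threading

def prefetch(paths, load, depth=8, workers=4):
    """Yield (path, image) pairs in arbitrary order while background
    threads keep the queue stocked with upcoming items."""
    q = queue.Queue(maxsize=depth)
    it = iter(paths)
    lock = threading.Lock()
    SENTINEL = object()

    def worker():
        while True:
            with lock:                  # hand out paths one at a time
                path = next(it, None)
            if path is None:
                q.put(SENTINEL)         # signal this worker is done
                return
            q.put((path, load(path)))   # the slow read happens off-thread

    for _ in range(workers):
        threading.Thread(target=worker, daemon=True).start()
    done = 0
    while done < workers:
        item = q.get()
        if item is SENTINEL:
            done += 1
        else:
            yield item

# Toy usage with a fake loader that just returns the path length.
results = dict(prefetch([f"img{i}.dcm" for i in range(5)], lambda p: len(p)))
print(sorted(results))  # ['img0.dcm', 'img1.dcm', 'img2.dcm', 'img3.dcm', 'img4.dcm']
```

For CPU-heavy decode steps a `multiprocessing.Pool` stage after the I/O threads may still help; the split between I/O-bound and CPU-bound work is exactly what is worth profiling here.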

Neural Network Architecture and General ML Advisory

Our problem is not easily shoehorned into any of the existing problems in ML-driven image classification.  Specifically, we have several complexities that preclude out-of-the-box conv-net approaches:

  1. Our images are big, 3000x3000.  It is debatable whether resolution reductions would preserve the features which define classes.
  2. While we only have two classes (normal, abnormal), the abnormal class can be any of about five-dozen feature types.
  3. The features defining class membership (abnormal specifically) are not prominent in the images; they usually occupy a small percentage of the overall image.
  4. The features defining class membership (abnormal specifically) are not consistent in size.

We’re currently using two approaches: Support Vector Machines and Deep Neural Networks (specifically convolutional networks, variants of AlexNet.) We have a prioritized research path we’re following, but we’re very interested in variations, enhancements, and any out-of-box ideas.

Hardware Setup Validation

We think we’ve set up our hardware pretty well, but there are obviously some bottlenecks.  Our guess is that we’re network-constrained currently

We’re in the process of:

  1. Direct connection of computer to QNAP NAS via cross-over cable
  2. Port trunking (2x) QNAP NAS to switch
  3. Port trunking (2x) computer to switch
  4. Exploring NFS alternatives such as QNAP http-based web server

We’re open to expert advice on intelligent tweaks.

Machine Learning
Deep Networks
deep learning

$100/hr - $250/hr

Starts Jan 18, 2016

28 Proposals Status: CLOSED

Client: D********** ***

Posted: Jan 15, 2016

Building Robust Model to Classify Segments and Price Risks for Auto Insurance

We are a start-up in the Auto Insurance space. We are looking for robust modules to help us classify, segment, and price our personal auto insurance risks with confidence. The modules should provide output we can use as stand-alone analytics.

We have no data since we are a start-up. However, we think we can leverage some government data.

The main goal is to use personal data found on social networking sites such as Facebook, LinkedIn, and Twitter, together with GPA, university attended, SAT score, career path, and income, to assess a consumer's insurance risk beyond their credit score and zip code. We believe online reputation and/or professional connections are factors that should be considered when extending credit or auto insurance coverage, especially to someone with a scant or spotty credit history who might otherwise have trouble getting a loan or coverage.

Automotive
Financial Services
Market Segmentation and Targeting

$2,600

8 Proposals Status: CLOSED

Client: P*********

Posted: Jan 11, 2016
