Crayon Data is hiring a Data Scientist

Job Description

  • Translate business requirements into a set of analytical models
  • Perform data analysis (with a representative sample data slice) and build/prototype model(s)
  • Work with business users and/or data scientists to formulate model designs using large data sets
  • Provide inputs to the data ingestion/engineering teams on data requirements for model(s), in terms of size, format, associations and cleansing
  • Identify/provide approaches and data to validate the model(s)
  • Collaborate with the data engineering team to transfer business understanding and get the model(s) productised
  • Validate output along with business users
  • Tune model(s) to improve results over time
  • Understand clients' business challenges and goals to formulate approaches for data analysis and model creation that support their decision-making
  • Perform hands-on data analysis and model creation
  • Work in highly collaborative teams that strive to build quality systems and provide business value
  • Mentor junior team members

Requirements

  • Understand business problems and address them by leveraging data, characterized by high volume and dimensionality, from multiple sources
  • Communicate complex models and analysis in a clear and precise manner
  • Build statistical, behavioural and other predictive models using supervised and unsupervised machine learning, statistical analysis, and related modelling techniques
  • Display a strong understanding of recommender systems such as collaborative filtering, content-based filtering and association rule mining
  • Understand unstructured (text) data processing and NLP
  • Have experience with matrices, distributions and probability
  • Have hands-on experience in Java or Scala
  • Be proficient with relational databases, natural language processing and at least one scripting language, preferably Python or Ruby
  • Have a working knowledge of the big data tech stack, including Hadoop, Spark and NoSQL databases such as Couchbase, HBase and Solr
  • Have previous exposure to DevOps, containers such as Docker, and cloud environments such as AWS and Azure
  • Bring 5 to 10 years of experience in a similar, relevant role, including prior work in the big data space alongside a big data engineering team, a data visualization team, and data and business analysts
  • The position is based in Chennai but may require domestic and international travel.