Perhaps the most common way of integrating R and Hadoop is to use HDFS for long-term storage of all the data. MapReduce jobs (submitted from Hive or Pig) then distill the data down to samples small enough to load into R. From there you can use all the great tools R provides on a sample drawn from your Hadoop data store.
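As a minimal sketch of this workflow, the snippet below shells out to the Hive CLI to pull a sample onto the local filesystem and then loads it into an R data frame. The table name `events`, the output path, and the use of Hive's `TABLESAMPLE` clause are illustrative assumptions; a real cluster with the `hive` command on the PATH is required.

```r
# Hypothetical example: sample a Hive table and analyze the result in R.
# Assumes a Hive table named "events" and the hive CLI installed locally.
system(paste(
  "hive -e 'SELECT * FROM events TABLESAMPLE(1 PERCENT)'",
  "> /tmp/events_sample.tsv"
))

# The Hive CLI emits tab-separated rows; read them into memory.
sample_df <- read.delim("/tmp/events_sample.tsv", header = FALSE)

# From here, any in-memory R tooling applies to the sample.
summary(sample_df)
```

The key point is that only the small sample ever leaves the cluster; the heavy filtering and aggregation stay in MapReduce, while R works entirely in local memory.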
That being said, R and Hadoop generally do not integrate very well. What makes R valuable is its amazing library of primitives for statistics and visualization. Those libraries are non-distributed and usually operate only on data residing in memory. Hence, if you find an R method slow on a particular dataset, there is really no way to make it run faster with Hadoop.