I'll be giving an evening talk at the New York Predictive Analytics World, http://www.predictiveanalyticsworld.com/newyork/2011/. The rough plan:
This talk will touch upon topics in data analysis, statistics, and computing relating to modern massive data challenges. How do classical theories of statistical inference and asymptotics translate into statistical practice in the modern world? What role should complex Bayesian procedures and other cutting-edge methodologies play in the data analyst's toolkit? Computationally, how can we manage the data deluge, and how is statistical software evolving? What are the implications for the data analyst? What dangers arise in addressing these very questions? I'll suggest possible answers to some of these questions, and hope to spur further debate by posing others.