<div dir="ltr">
<div>Just as a reminder, the poster submission deadline of August 15 for our workshop is approaching! Please read the updated poster submission information below, which is also available on the <a href="https://learningfromusersworkshop.github.io/" target="_blank">workshop website</a>. Feel free to distribute this information as you see fit:</div><div><br><br></div>
<div style="font-size:12.8px;text-decoration-style:initial;text-decoration-color:initial"><div style="text-decoration-style:initial;text-decoration-color:initial"><b>IEEE VIS 2018 Workshop:</b></div><div style="text-decoration-style:initial;text-decoration-color:initial"><b>MACHINE LEARNING FROM USER INTERACTION FOR VISUALIZATION AND ANALYTICS</b></div></div><div style="font-size:12.8px;text-decoration-style:initial;text-decoration-color:initial"><br></div><div style="text-decoration-style:initial;text-decoration-color:initial"><div><span style="font-size:12.8px"><b>CALL FOR PAPERS</b></span></div><div><span style="font-size:12.8px"><br></span></div><div><span style="font-size:12.8px"><b>Date and Location</b>: October 22, 2018 in Berlin, Germany</span></div><div><br></div><div><span style="font-size:12.8px"><b style="background-color:rgb(255,255,255);text-decoration-style:initial;text-decoration-color:initial">Website</b><span style="background-color:rgb(255,255,255);text-decoration-style:initial;text-decoration-color:initial;float:none;display:inline">: </span><a href="https://learningfromusersworkshop.github.io/" style="color:rgb(17,85,204);background-color:rgb(255,255,255)" target="_blank">https://learningfromusersworkshop.github.io/</a><br></span></div><div><span style="font-size:12.8px"><br></span></div><div><span style="font-size:12.8px">The goal of this workshop
is to bring together researchers from across the VIS community –
SciVis, InfoVis, and VAST – to share their knowledge and build
collaborations at the intersection of the Machine Learning and
Visualization fields, with a focus on learning from user interaction.
Our intention is to draw on
expertise from across all fields of VIS to generate open
discussion about how we currently learn from user interaction and where
future research in this area can go. We hope to foster
discussion of systems, interaction models, and interaction
techniques across the VIS community as a whole, rather than
continuing to hold these discussions independently within the
SciVis, InfoVis, and VAST fields. Further, we hope to collaboratively
create a research agenda that explores the future of machine learning
with user interaction based on the discussion during the workshop.</span></div><div><span style="font-size:12.8px"><br></span></div><div><span style="font-size:12.8px"><b>WORKSHOP TOPICS</b></span></div><div><span style="font-size:12.8px"><br></span></div><div><span style="font-size:12.8px">The workshop
will focus on issues and opportunities related to using machine
learning to learn from user interaction during data
visualization and analysis. Specifically, we will address research
questions including:</span></div><div><ul><li>How are machine learning algorithms currently learning from user interaction, and what other possibilities exist?<br></li><li>What kinds of interactions can provide feedback to machine learning algorithms?<br></li><li>What can machine learning algorithms learn from interactions?<br></li><li>Which machine learning algorithms are most applicable in this domain?<br></li><li>How can machine learning algorithms be designed to enable user interaction and feedback?<br></li><li>How can visualizations and interactions be designed to exploit machine learning algorithms?<br></li><li>How can visualization system architectures be designed to support machine learning?<br></li><li>How should we manage conflicts between the user’s intent and the data or machine learning algorithm capabilities?<br></li><li>How can we evaluate systems that incorporate both machine learning algorithms and user interaction together?<br></li><li>How can machine learning and user interaction together make both computation and user cognition more efficient?<br></li><li>How can we support the sensemaking process by learning from user interaction?<br></li></ul></div><div><span style="font-size:12.8px"><b><br></b></span></div><div><span style="font-size:12.8px"><b>SUBMISSIONS</b></span></div><div><span 
style="font-size:12.8px"><br></span></div><div><span style="font-size:12.8px"><b>Posters</b></span></div><div><span style="font-size:12.8px"><br></span></div><div><span style="font-size:12.8px">We
invite authors of both late-breaking work and related contributions from
other research domains to submit extended abstracts of 2 to 4
pages (NOT including references). All submissions must be
formatted according to the <a href="http://junctionpublishing.org/vgtc/Tasks/camera.html" target="_blank">VGTC conference style template</a> (i.e., <b>NOT the journal style template that full papers use</b>).
Extended abstracts are to be submitted online through the Precision
Conference System (additional details TBA; do NOT use the PCS link above
to submit extended abstracts for posters). All abstracts accepted for
presentation at the workshop will be published on IEEE Xplore and linked from the workshop
website. All abstracts should contain full author names and
affiliations. If applicable, a link to a short video (up to 5 min. in
length) may also be submitted. The abstracts will be juried by the
organizers and selected external reviewers and will be chosen according
to relevance, quality, and likelihood that they will stimulate and
contribute to the discussion. At least one author of each accepted
poster must register for the conference (even if only for the workshop). Registration information will be available on the <a href="http://ieeevis.org/year/2018/welcome" target="_blank">IEEE VIS website</a>.</span></div><div><span style="font-size:12.8px"><br></span></div><div><span style="font-size:12.8px"><b>IMPORTANT DATES</b></span></div><div><ul><li>Submission deadline: August 15, 2018<br></li><li>Author notification: September 1, 2018<br></li><li>Camera-ready deadline: October 1, 2018<br></li><li>Workshop: October 22, 2018<br></li></ul></div><div><span style="font-size:12.8px"><br></span></div><div><span style="font-size:12.8px"><b>ORGANIZERS</b></span></div><div><ul><li>John Wenskovitch, Virginia Tech (<a href="mailto:jw87@vt.edu" target="_blank">jw87@vt.edu</a>)</li><li>Michelle Dowling, Virginia Tech (<a href="mailto:dowlingm@vt.edu" target="_blank">dowlingm@vt.edu</a>)</li><li>Chris North, Virginia Tech</li><li>Remco Chang, Tufts University</li><li>Alex Endert, Georgia Tech</li><li>David Rogers, Los Alamos National Lab</li></ul></div></div>
<br></div>