Le will use his award of $655,466 to advance the theoretical understanding of topological graphs, which appear in many practical applications, including logistics and planning, very large-scale integration design, image processing, and robot navigation. Topological graphs have what Le calls "nice structures," with few or no edge crossings, the places where links between vertices in the graph are forced to intersect. According to Le, research on the structures of topological graphs has produced powerful algorithmic techniques over the past two decades, but current techniques are reaching their limits.

Many decisions go into designing a visualization, from choosing the visual styles of a chart to what background information to provide. Effective design decisions can lead to powerful and intuitive processing by a visualization reader, but poor choices can leave key patterns misunderstood, stymie critical thinking with data, and leave the reader vulnerable to biases and misinformation. For Bearfield, this raises important questions about how we can design visualizations to encourage critical thinking and afford trust. "The visualization community does not yet have a systematic understanding of factors that impact trust in visualization design, nor a formalized model of how trust is measured and established between humans and data," explains Bearfield. "Ideally, we want human readers to engage in calibrated trust when interacting with data visualizations, which involves critically evaluating the information rather than unconditionally dismissing or accepting it. At the same time, we want to support visualization creators to design visualizations that elicit calibrated trust."
Data visualizations leverage the strength of our visual perceptual system to process information, helping us communicate information more efficiently. Bearfield, whose award totals $631,846, is working to develop a formalized model to measure trust in human-data interaction and to enhance critical thinking between humans and data in visual data communications.

Offline reinforcement learning (RL), a technology that learns a policy offline from logged data without the need to interact with online environments, has become a favorable choice for decision-making processes like interactive recommendation. Offline RL faces the value overestimation problem. To address it, existing methods employ conservatism, e.g., by constraining the learned policy to be close to behavior policies or by punishing rarely visited state-action pairs. However, when such offline RL is applied to recommendation, it causes a severe Matthew effect, i.e., the rich get richer and the poor get poorer, by promoting popular items or categories while suppressing the less popular ones. This is a notorious issue that needs to be addressed in practical recommender systems.

In this paper, we aim to alleviate the Matthew effect in offline RL-based recommendation. Through theoretical analyses, we find that the conservatism of existing methods fails in pursuing users' long-term satisfaction. This inspires us to add a penalty term that relaxes the pessimism on states where the logging policy has high entropy, indirectly penalizing actions that lead to less diverse states. This leads to the main technical contribution of the work: the Debiased model-based Offline RL (DORL) method. Experiments show that DORL not only captures user interests well but also alleviates the Matthew effect.
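The entropy-relaxed pessimism described in the abstract can be sketched in code. This is an illustrative reconstruction, not the paper's actual DORL implementation: it assumes a MOPO-style setup in which a model-uncertainty penalty is subtracted from the predicted reward, and it scales that penalty down where the logging policy's action entropy is high. The function names and the `lam`/`alpha` weights are hypothetical.

```python
import numpy as np

def entropy(probs):
    """Shannon entropy of a discrete distribution (natural log)."""
    p = np.asarray(probs, dtype=float)
    p = p[p > 0]
    return float(-(p * np.log(p)).sum())

def pessimistic_reward(r_model, uncertainty, lam=1.0):
    """Standard model-based pessimism: penalize the model's predicted
    reward by its uncertainty estimate, uniformly across all states."""
    return r_model - lam * uncertainty

def relaxed_reward(r_model, uncertainty, logging_probs, lam=1.0, alpha=0.5):
    """Entropy-relaxed pessimism: shrink the penalty on states where the
    logging policy is diverse (high entropy), so the learned policy is not
    pushed back toward popularity-concentrated behavior (Matthew effect)."""
    h = entropy(logging_probs) / np.log(len(logging_probs))  # normalized to [0, 1]
    return r_model - lam * (1.0 - alpha * h) * uncertainty
```

For example, with `lam=1.0` and `alpha=0.5`, a state whose logging policy is uniform over four items retains a smaller uncertainty penalty than a state where the logging policy concentrates nearly all of its mass on one popular item, so diverse states look relatively more attractive to the learned policy.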