Journal of Guangxi Normal University (Natural Science Edition) ›› 2019, Vol. 37 ›› Issue (1): 62-70. DOI: 10.16088/j.issn.1001-6600.2019.01.007

• Special Column: The 24th China Conference on Information Retrieval •

Group Ranking Methods with Loss Function Incorporation

LIN Yuan1, LIU Haifeng2, LIN Hongfei2, XU Kan2*

  1. Faculty of Humanities and Social Sciences, Dalian University of Technology, Dalian, Liaoning 116024, China;
  2. Faculty of Electronic Information and Electrical Engineering, Dalian University of Technology, Dalian, Liaoning 116024, China
  • Received: 2018-09-27  Online: 2019-01-20  Published: 2019-01-08
  • Corresponding author: XU Kan (born 1981), male, from Dalian, Liaoning; senior engineer and Ph.D., Dalian University of Technology. E-mail: xukan@dlut.edu.cn
  • Funding:
    National Natural Science Foundation of China (61602078, 61572102, 61402075); Humanities and Social Sciences Research Foundation of the Ministry of Education of China (16YJCZH128)

Abstract: Learning to rank has attracted much attention in information retrieval and machine learning. A series of learning-to-rank algorithms have been proposed based on three ways of constructing training samples: pointwise, pairwise, and listwise. In particular, the group ranking approach, one of the listwise methods, can effectively improve ranking performance. This paper explores how to combine the loss functions from these methods to improve group ranking performance; the basic idea is to incorporate different loss functions to enrich the objective loss function of a neural-network-based ranker. First, a group learning-to-rank method based on Jeffrey's divergence is presented. Second, a framework for incorporating this group-ranking loss with other loss functions is proposed, and the performance of the proposed method is evaluated on the LETOR 3.0 dataset. Finally, experimental results show that, with a good weighting scheme, the proposed weighted loss-function-incorporation method significantly outperforms baselines that use a single loss function, and is comparable to state-of-the-art algorithms in most cases.

Key words: learning to rank, information retrieval, neural network, loss function, Jeffrey's divergence
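The page gives no formulas, so as a minimal sketch of the two ingredients named in the abstract: Jeffrey's divergence is the symmetrized Kullback-Leibler divergence, J(P, Q) = KL(P‖Q) + KL(Q‖P) = Σᵢ (pᵢ − qᵢ) log(pᵢ/qᵢ), which can serve as a listwise loss between a predicted and a target score distribution over a group of documents; a weighted fusion of two losses is then L = λ·L_group + (1 − λ)·L_other. The softmax normalization, the weight λ, and `other_loss` below are illustrative assumptions, not details taken from the paper.

```python
import math

def softmax(scores):
    """Turn raw scores into a probability distribution over documents."""
    m = max(scores)
    exps = [math.exp(s - m) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

def jeffreys_divergence(p, q):
    """Jeffrey's divergence: KL(P||Q) + KL(Q||P) = sum_i (p_i - q_i) * log(p_i / q_i).

    Symmetric in p and q, and zero iff the two distributions are equal.
    Assumes strictly positive probabilities (guaranteed here by softmax).
    """
    return sum((pi - qi) * math.log(pi / qi) for pi, qi in zip(p, q))

# One query's group of documents: model scores vs. graded relevance labels
# (values are made up for illustration).
pred = softmax([2.0, 1.0, 0.5])
true = softmax([2.0, 1.0, 0.0])
group_loss = jeffreys_divergence(pred, true)

# Weighted fusion with a second, hypothetical loss term.
other_loss = 0.3
lam = 0.7
fused = lam * group_loss + (1 - lam) * other_loss
```

In training, `fused` would be minimized over the network parameters; the weight λ controls the trade-off between the group-ranking loss and the auxiliary loss.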

CLC number: TP391
[1] PAGE L,BRIN S,MOTWANI R,et al.The PageRank citation ranking:bringing order to the web[R/OL].Stanford, CA:Stanford Info Lab,1999[2018-09-27].http://ilpubs.stanford.edu:8090/422/1/1999-66.pdf.
[2] KLEINBERG J M. Authoritative sources in a hyperlinked environment[J].Journal of the ACM,1999,46(5): 604-632.DOI:10.1145/324133.324140.
[3] ROBERTSON S E.Overview of the okapi projects[J].Journal of Documentation,1997,53(1):3-7.DOI: 10.1108/EUM0000000007186.
[4] ZHAI Chengxiang.Statistical language models for information retrieval[J].Foundations and Trends in Information Retrieval,2008,2(3):137-213.DOI:10.1561/1500000008.
[5] MAGNANT C,GRIVEL E,GIREMUS A,et al.Jeffrey’s divergence for state-space model comparison[J].Signal Processing,2015,114:61-74.DOI:10.1016/j.sigpro.2015.02.006.
[6] LIU Tieyan.Learning to rank for information retrieval[J].Foundations and Trends in Information Retrieval,2009,3(3):225-331.DOI:10.1561/1500000016.
[7] COSSOCK D,ZHANG Tong.Subset ranking using regression[C]//Proceedings of the 19th Annual Conference on Learning Theory.Berlin:Springer,2006:605-619.DOI:10.1007/11776420_44.
[8] CRAMMER K,SINGER Y.Pranking with ranking[C]//Advances in neural information processing systems: Proceedings of the First 12 Conferences.Cambridge,MA:MIT Press,2002:641-647.
[9] FUHR N.Optimum polynomial retrieval functions based on the probability ranking principle[J].ACM Transactions on Information Systems,1989,7(3):183-204.DOI:10.1145/65943.65944.
[10] NALLAPATI R.Discriminative models for information retrieval[C]//Proceedings of the 27th Annual International ACM SIGIR Conference on Research and Development in Information Retrieval.New York,NY:ACM Press,2004:64-71.DOI:10.1145/1008992.1009006.
[11] JOACHIMS T.Optimizing search engines using clickthrough data[C]//Proceedings of the Eighth ACM SIGKDD International Conference on Knowledge Discovery and Data Mining.New York,NY:ACM Press,2002:133-142. DOI:10.1145/775047.775067.
[12] FREUND Y,IYER R,SCHAPIRE R E,et al.An efficient boosting algorithm for combining preferences[J]. Journal of Machine Learning Research,2003,4:933-969.
[13] BURGES C,SHAKED T,RENSHAW E,et al.Learning to rank using gradient descent[C]//Proceedings of the 22nd International Conference on Machine Learning.New York,NY:ACM Press,2005:89-96.DOI:10.1145/1102351. 1102363.
[14] TAYLOR M,GUIVER J,ROBERTSON S,et al.Softrank:optimizing non-smooth rank metrics[C]//Proceedings of the 2008 International Conference on Web Search and Data Mining.New York,NY:ACM Press,2008:77-86.DOI: 10.1145/1341531.1341544.
[15] XU Jun,LIU Tieyan,LU Min,et al.Directly optimizing evaluation measures in learning to rank[C]//Proceedings of the 31st Annual International ACM SIGIR Conference on Research and Development in Information Retrieval.New York,NY:ACM Press,2008:107-114.DOI:10.1145/1390334.1390355.
[16] CHAKRABARTI S,KHANNA R,SAWANT U,et al.Structured learning for non-smooth ranking losses[C]//Proceedings of the 14th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining.New York,NY:ACM Press,2008:88-96.DOI:10.1145/1401890.1401906.
[18] CAO Zhe,QIN Tao,LIU Tieyan,et al.Learning to rank: from pairwise approach to listwise approach[C]//Proceedings of the 24th International Conference on Machine Learning.New York,NY:ACM Press,2007:129- 136.DOI:10.1145/1273496.1273513.
[19] XIA Fen, LIU Tieyan, WANG Jue, et al. Listwise approach to learning to rank: theory and algorithm[C]//Proceedings of the 25th International Conference on Machine Learning. New York, NY:ACM Press, 2008: 1192-1199.DOI:10.1145/1390156.1390306.
[20] QIN Tao,ZHANG Xudong,TSAI M F,et al.Query-level loss functions for information retrieval[J]. Information Processing and Management,2008,44(2):838-855.DOI:10.1016/j.ipm.2007.07.016.
[21] LIN Yuan,LIN Hongfei,XU Kan,et al.Group-enhanced ranking[J].Neurocomputing,2015,150:99-105.DOI: 10.1016/j.neucom.2014.03.079.
[22] WU Mingrui,CHANG Yi,ZHENG Zhaohui,et al.Smoothing DCG for learning to rank: a novel approach using smoothed hinge functions[C]//Proceedings of the 18th ACM Conference on Information and Knowledge Management.New York,NY:ACM Press,2009:1923-1926.DOI:10.1145/1645953.1646266.
[23] MOON T,SMOLA A,CHANG Yi,et al.IntervalRank:isotonic regression with listwise and pairwise constraints [C]//Proceedings of the Third ACM International Conference on Web Search and Data Mining.New York,NY: ACM Press,2010:151-160.DOI:10.1145/1718487.1718507.
[24] TVERSKY A.Intransitivity of preferences[J].Psychological Review,1969,76(1):31-48.DOI:10.1037/h0026750.
[25] QIN Tao,LIU Tieyan,XU Jun,et al.LETOR: a benchmark collection for research on learning to rank for information retrieval[J].Information Retrieval,2010,13(4):346-374.DOI:10.1007/s10791-009-9123-y.
Copyright © Editorial Office of the Journal of Guangxi Normal University (Natural Science Edition)
Address: 15 Yucai Road, Sanlidian, Guilin, Guangxi 541004, China
Tel: 0773-5857325  E-mail: gxsdzkb@mailbox.gxnu.edu.cn