TY - GEN
T1 - Randomized greedy search for structured prediction
T2 - 28th International Joint Conference on Artificial Intelligence, IJCAI 2019
AU - Ma, Chao
AU - Rezaur Rahman Chowdhury, F. A.
AU - Deshwal, Aryan
AU - Islam, Md Rakibul
AU - Doppa, Janardhan Rao
AU - Roth, Dan
N1 - Publisher Copyright:
© 2019 International Joint Conferences on Artificial Intelligence. All rights reserved.
PY - 2019
Y1 - 2019
N2 - In a structured prediction problem, we need to learn a predictor that can produce a structured output given a structured input (e.g., part-of-speech tagging). The key learning and inference challenge stems from the exponential size of the structured output space. This paper makes four contributions towards the goal of a computationally efficient inference and training approach for structured prediction that allows employing complex models and optimizing for non-decomposable loss functions. First, we define a simple class of randomized greedy search (RGS) based inference procedures that leverage classification algorithms for simple outputs. Second, we develop an RGS-specific learning approach for amortized inference that can quickly produce high-quality outputs for a given set of structured inputs. Third, we plug our amortized RGS inference solver inside the inner loop of parameter-learning algorithms (e.g., structured SVM) to improve the speed of training. Fourth, we perform extensive experiments on diverse structured prediction tasks. Results show that our proposed approach is competitive with or better than many state-of-the-art approaches despite its simplicity.
AB - In a structured prediction problem, we need to learn a predictor that can produce a structured output given a structured input (e.g., part-of-speech tagging). The key learning and inference challenge stems from the exponential size of the structured output space. This paper makes four contributions towards the goal of a computationally efficient inference and training approach for structured prediction that allows employing complex models and optimizing for non-decomposable loss functions. First, we define a simple class of randomized greedy search (RGS) based inference procedures that leverage classification algorithms for simple outputs. Second, we develop an RGS-specific learning approach for amortized inference that can quickly produce high-quality outputs for a given set of structured inputs. Third, we plug our amortized RGS inference solver inside the inner loop of parameter-learning algorithms (e.g., structured SVM) to improve the speed of training. Fourth, we perform extensive experiments on diverse structured prediction tasks. Results show that our proposed approach is competitive with or better than many state-of-the-art approaches despite its simplicity.
UR - http://www.scopus.com/inward/record.url?scp=85074918224&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=85074918224&partnerID=8YFLogxK
U2 - 10.24963/ijcai.2019/713
DO - 10.24963/ijcai.2019/713
M3 - Conference contribution
AN - SCOPUS:85074918224
T3 - IJCAI International Joint Conference on Artificial Intelligence
SP - 5130
EP - 5138
BT - Proceedings of the 28th International Joint Conference on Artificial Intelligence, IJCAI 2019
A2 - Kraus, Sarit
PB - International Joint Conferences on Artificial Intelligence
Y2 - 10 August 2019 through 16 August 2019
ER -