Machine Learning Proceedings 1993 - Proceedings of the Tenth International Conference on Machine Learning, University of Massachusetts, Amherst, June 27-29, 1993

By: Machine Learning

Elsevier Reference Monographs, 2014

ISBN: 9781483298627, 361 pages

Format: PDF

Copy protection: DRM

Windows PC, Mac OS X, Apple iPad, Android tablet PCs

Price: 54.95 EUR

Front Cover

1

Machine Learning

2

Copyright Page

3

Table of Contents

4

PREFACE

8

ORGANIZING COMMITTEE

9

PROGRAM COMMITTEE

9

WORKSHOPS

10

Chapter 1. The Evolution of Genetic Algorithms: Towards Massive Parallelism

14

Abstract

14

1 INTRODUCTION

14

2 TRADITIONAL GAs

14

3 COARSE-GRAIN PARALLEL GAs

15

4 FINE-GRAIN PARALLEL GAs

16

5 FINE VS. COARSE-GRAIN PARALLELISM

17

6 SUMMARY & FUTURE DIRECTIONS

20

References

21

Chapter 2. ÉLÉNA: A Bottom-Up Learning Method

22

ABSTRACT

22

INTRODUCTION

22

1 PRESENTATION OF THE SYSTEM

23

2 THE LEARNING COMPONENT

24

3 EXPERIMENTS

25

4 RELATED WORK

27

CONCLUSION

28

Acknowledgements

28

References

28

Chapter 3. Addressing the Selective Superiority Problem: Automatic Algorithm/Model Class Selection

30

Abstract

30

1 THE PROBLEM OF SELECTIVE SUPERIORITY

30

2 AUTOMATIC ALGORITHM SELECTION

30

3 KNOWLEDGE-BASED SEARCH

31

4 RECURSIVE COMBINATION OF MODEL CLASSES

32

5 MCS: A MODEL CLASS SELECTION SYSTEM

32

6 ILLUSTRATION

34

7 FUTURE WORK

36

8 CONCLUSION

37

Acknowledgments

37

References

37

Chapter 4. Using Decision Trees to Improve Case-Based Learning

38

Abstract

38

1 INTRODUCTION

38

2 LEARNING THE DEFINITION OF UNKNOWN WORDS

39

3 COMPARING THE DECISION TREE, CBL, AND HYBRID APPROACHES

40

4 RELATED WORK AND CONCLUSIONS

43

Acknowledgments

44

References

44

Chapter 5. GALOIS: An order-theoretic approach to conceptual clustering

46

Abstract

46

1 INTRODUCTION

46

2 THE CONCEPT LATTICE: BACKGROUND

47

3 AN ALGORITHM FOR THE INCREMENTAL DETERMINATION OF THE CONCEPT LATTICE

48

4 COMPUTATIONAL COMPLEXITY

50

5 EMPIRICAL EVALUATION OF GALOIS AS A LEARNING SYSTEM

50

6 RELATED WORK

52

7 CONCLUSION AND FUTURE WORK

53

Acknowledgements

53

References

53

Chapter 6. Multitask Learning: A Knowledge-Based Source of Inductive Bias

54

Abstract

54

1 INTRODUCTION

54

2 MULTITASK LEARNING AND INDUCTIVE BIAS

54

3 AN EXAMPLE OF MULTITASK CONNECTIONIST LEARNING

55

4 MULTITASK CONNECTIONIST LEARNING IN MORE DETAIL

58

5 MULTITASK DECISION TREES

59

6 RELATED WORK

60

7 SUMMARY

60

Acknowledgements

61

References

61

Chapter 7. Using Qualitative Models to Guide Inductive Learning

62

Abstract

62

1 INTRODUCTION

62

2 CONTEXT & RELATED WORK

63

3 LEARNING METHOD

63

4 EXPERIMENTAL EVALUATION

65

5 DISCUSSION AND CONCLUSION

69

References

69

Chapter 8. Automating Path Analysis for Building Causal Models from Data

70

Abstract

70

1. INTRODUCTION

70

2. BACKGROUND: REGRESSION

70

3. PATH ANALYSIS

71

4. PATH ANALYSIS OF PHOENIX DATA

72

5. AUTOMATIC GENERATION OF PATH MODELS

73

6. EXPERIMENTS

74

7. CONCLUSION

76

APPENDIX: DATA GENERATION

76

Acknowledgments

77

References

77

Chapter 9. Constructing Hidden Variables in Bayesian Networks via Conceptual Clustering

78

Abstract

78

1 INTRODUCTION

78

2 HIDDEN VARIABLES

78

3 LEARNING IN TANTRA

79

4 RESULTS

82

5 RELATED WORK

83

6 DISCUSSION

84

References

85

Chapter 10. Learning Symbolic Rules Using Artificial Neural Networks

86

Abstract

86

1 INTRODUCTION

86

2 EXTRACTING RULES FROM NEURAL NETWORKS

87

3 EXTENDING NofM WITH SOFT WEIGHT-SHARING

88

4 DATA SETS

89

5 EXPERIMENTAL RESULTS

89

6 CONCLUSIONS

92

ACKNOWLEDGEMENTS

93

REFERENCES

93

Chapter 11. Small Disjuncts in Action: Learning to Diagnose Errors in the Local Loop of the Telephone Network

94

Abstract

94

1 INTRODUCTION

94

2 THE NYNEX MAX DOMAIN

95

3 C4.5 RESULTS

96

4 RL RESULTS

97

5 GENERALITY VS. ACCURACY

98

6 DISCUSSION: DISJUNCT SIZES AND NOISE

99

7 CONCLUSIONS

101

References

101

Chapter 12. Concept Sharing: A Means to Improve Multi-Concept Learning

102

Abstract

102

1 Introduction

102

2 Relational Horn clause learning algorithms

102

3 Multiple concept FOCL

103

4 Evaluation

104

5 Related Work

108

6 Discussion

108

Acknowledgments

109

References

109

Chapter 13. Discovering Dynamics

110

Abstract

110

1 Introduction

110

2 The LAGRANGE Algorithm

111

3 Experimental evaluation

112

4 Related work

114

5 Discussion

114

References

115

Chapter 14. Synthesis of Abstraction Hierarchies for Constraint Satisfaction by Clustering Approximately Equivalent Objects

117

Abstract

117

1 Introduction

117

2 Abstract Search Spaces

118

3 Parameterized CSPs

119

4 Synthesis of Problem Solvers

119

5 Experimental Results

121

6 Future Work

122

7 Related Work

123

8 Summary

123

References

124

Chapter 15. SKICAT: A Machine Learning System for Automated Cataloging of Large Scale Sky Surveys

125

ABSTRACT

125

1. INTRODUCTION

125

2. MACHINE LEARNING BACKGROUND

125

3. CLASSIFYING SKY OBJECTS

127

4. CONCLUSIONS AND FUTURE WORK

131

REFERENCES

132

Chapter 16. Learning From Entailment: An Application to Propositional Horn Sentences

133

Abstract

133

1 INTRODUCTION

133

2 RELATED WORK

135

3 THE ALGORITHM

136

4 APPLICATION TO APPROXIMATE ENTAILMENT

138

5 SUMMARY AND FUTURE WORK

139

Acknowledgments

140

References

140

Chapter 17. Efficient Domain-Independent Experimentation

141

Abstract

141

1 Introduction

141

2 Learning by Experimentation

142

3 Domain-independent Heuristics for Efficient Experimentation

143

4 Results

144

5 Conclusion

145

Acknowledgments

146

References

147

Chapter 18. Learning Search Control Knowledge for Deep Space Network Scheduling

148

Abstract

148

1 INTRODUCTION

148

2 COMPOSER

149

3 THE DEEP SPACE NETWORK

149

4 EXPERIMENT AND RESULTS

152

5 DISCUSSION

153

Acknowledgements

154

References

154

Chapter 19. Learning procedures from interactive natural language instructions

156

Abstract

156

1 INTRODUCTION

156

2 RELATED WORK

157

3 INSTRUCTION WITHIN AN AUTONOMOUS AGENT

157

4 LEARNING FROM INSTRUCTION

158

5 EXAMPLE

159

6 RESULTS

161

7 CONCLUSION

162

References

162

Chapter 20. Generalization under Implication by Recursive Anti-unification

164

Abstract

164

1 INTRODUCTION

164

2 PRELIMINARIES

165

3 GENERALIZATION BY RECURSIVE ANTI-UNIFICATION

166

4 RELATED WORK

170

5 CONCLUDING REMARKS

170

References

171

Chapter 21. Supervised learning and divide-and-conquer: A statistical approach

172

Abstract

172

1 INTRODUCTION

172

2 HIERARCHICAL MIXTURES OF EXPERTS

173

3 CONCLUSIONS

178

4 APPENDIX

179

Acknowledgements

179

References

179

Chapter 22. Hierarchical Learning in Stochastic Domains: Preliminary Results

180

Abstract

180

1 INTRODUCTION

180

2 Q AND DG LEARNING

180

3 LANDMARK NETWORKS

182

4 HDG LEARNING ALGORITHM

183

5 PRELIMINARY EXPERIMENTAL RESULTS

184

6 RELATED WORK

185

7 FUTURE WORK

185

References

186

Chapter 23. Constraining Learning with Search Control

187

Abstract

187

1 Introduction

187

2 Decisions Based on Lack of Knowledge

189

3 Experimental Results

190

4 Summary and Discussion

193

Acknowledgments

193

References

193

Chapter 24. Scaling Up Reinforcement Learning for Robot Control

195

Abstract

195

1 Introduction

195

2 The Learning Algorithm

195

3 The Domain: A Mobile Robot Simulator

196

4 A Docking Task and Teaching

197

5 Hierarchical Learning

198

6 Hidden State

200

Acknowledgements

202

References

202

Chapter 25. Overcoming Incomplete Perception with Utile Distinction Memory

203

Abstract

203

1 INTRODUCTION

203

2 UTILITY-BASED DISTINCTIONS FOR MEMORY

204

3 DETAILS OF THE ALGORITHM

204

4 EXPERIMENTAL RESULTS

207

5 CONCLUSIONS

207

References

208

Chapter 26. Explanation Based Learning: A Comparison of Symbolic and Neural Network Approaches

210

Abstract

210

1 Introduction

210

2 An Overview of EBNN

210

3 Correspondence between Symbolic and Neural Network EBL

213

4 Summary and Conclusions

216

Acknowledgments

217

References

217

Chapter 27. Combinatorial optimization in inductive concept learning

218

Abstract

218

1 INTRODUCTION

218

2 PROBLEM DEFINITION

219

3 COMBINATORIAL OPTIMIZATION ALGORITHMS USED FOR RULE INDUCTION

219

4 ATRIS: A SHELL FOR RULE INDUCTION

220

5 EXPERIMENTS AND RESULTS

221

6 CONCLUSION AND FURTHER WORK

223

Acknowledgements

223

References

223

Chapter 28. Decision Theoretic Subsampling for Induction on Large Databases

225

Abstract

225

1 INTRODUCTION

225

2 OVERVIEW

226

3 INFORMATION CONTENT DISTRIBUTIONS

227

4 EXPECTED LOSS

227

5 SAMPLING STRATEGY

228

6 EVALUATION

229

7 CONCLUSION

231

Acknowledgements

232

References

232

Chapter 29. Learning DNF Via Probabilistic Evidence Combination

233

Abstract

233

1 INTRODUCTION

233

2 LEARNING CONJUNCTIONS AS INCREMENTAL PROBABILISTIC EVIDENCE COMBINATION

234

3 EXAMPLES OF NOISE MODELS

235

4 LEARNING DNF FROM NOISY DATA

236

5 EXPERIMENTAL RESULTS

237

6 FUTURE WORK

239

7 SUMMARY

239

References

240

Chapter 30. Explaining and Generalizing Diagnostic Decisions

241

Abstract

241

1 EXPLAINING AND GENERALIZING DECISIONS

241

2 EMPIRICAL EVALUATION

243

3 ORDER OF MAGNITUDE REASONING

245

4 RELATED WORK

247

5 CONCLUSION

247

Acknowledgements

248

References

248

Chapter 31. Combining Instance-Based and Model-Based Learning

249

Abstract

249

1 INTRODUCTION

249

2 USING MODELS AND INSTANCES

249

3 EMPIRICAL EVALUATION

250

4 CONCLUSION

255

Acknowledgements

255

References

255

Chapter 32. Data Mining of Subjective Agricultural Data

257

Abstract

257

1 INTRODUCTION

257

2 OVERVIEW

258

3 STATISTICAL PROCESSING OF THE NTEP DATA

258

4 INITIAL STUDY: PREDICTING CULTIVAR PERFORMANCE

259

5 LEARNING MODELS FROM THE NTEP DATA

260

6 CONCLUSIONS

263

References

263

Chapter 33. Lookahead Feature Construction for Learning Hard Concepts

265

Abstract

265

1 Introduction

265

2 The LFC Algorithm

266

3 Empirical Results

268

4 Discussion and Related Work

270

5 Conclusion

271

Acknowledgments

272

References

272

Chapter 34. Adaptive Neuro Control: How Black Box and Simple Can It Be

273

Abstract

273

1 INTRODUCTION

273

2 FROM NARENDRA'S APPROACH TO JORDAN'S APPROACH

274

3 THREE POSSIBLE EXTENSIONS OF JORDAN'S METHOD

276

4 COMPARISON OF THE FIVE METHODS

277

5 CONCLUSIONS

278

References

278

Chapter 35. An SE-tree based Characterization of the Induction Problem

281

Abstract

281

1 INTRODUCTION

281

2 A THEORY FOR INDUCTION

281

3 A LEARNING ALGORITHM

283

4 CLASSIFICATION ALGORITHMS

284

5 BIAS IN THE LEARNING PHASE

285

6 SE-TREE AND DECISION TREES

286

7 CONCLUSION AND FUTURE RESEARCH DIRECTIONS

287

Acknowledgements

288

References

288

Chapter 36. Density-Adaptive Learning and Forgetting

289

Abstract

289

1 Introduction and Motivation

289

2 Learning Algorithm

290

3 Density-Adaptive Forgetting

292

4 Conclusion and Future Extensions

294

Acknowledgements

296

References

296

Chapter 37. Efficiently Inducing Determinations: A Complete and Systematic Search Algorithm that Uses Optimal Pruning

297

Abstract

297

1 INTRODUCTION

297

2 RELATED WORK

297

3 VERIFYING A DETERMINATION

298

4 SEARCHING FOR DETERMINATIONS

300

5 CONCLUSIONS

303

Acknowledgements

303

References

303

Chapter 38. Compiling Bayesian Networks into Neural Networks

304

Abstract

304

1 Introduction

304

2 Bayesian Propagation Network Definition

305

3 Backpropagation

306

4 Representing Distributions

307

5 Empirical Evaluation of Generalization

308

6 Related Work

309

7 Conclusion

309

Acknowledgments

310

References

310

Chapter 39. A Reinforcement Learning Method for Maximizing Undiscounted Rewards

311

Abstract

311

1 Introduction

311

2 Background

311

3 Measures of Performance

312

4 The Connection Between Discounted and Undiscounted Value

314

5 Learning T-Optimal Policies

314

6 Advantages of R-Learning

315

7 Experimental Results

317

8 Related Work

317

9 Conclusion

318

Acknowledgements

318

References

318

Chapter 40. ATM Scheduling with Queuing Delay Predictions

319

Abstract

319

Introduction

319

ATM Networking

319

On-Line Dynamic Programming

320

Experimental Evaluation

323

Simulations

324

Conclusions

325

Acknowledgements

325

References

326

Chapter 41. Online Learning with Random Representations

327

Abstract

327

1 Online Learning

327

2 Learning with Expanded Representations

328

3 A Basic RR Network

329

4 Performance vs Representation Size

330

5 Unsupervised Learning

331

6 Many Irrelevant Inputs

332

7 RR vs Backpropagation

332

8 Conclusions

333

Acknowledgments

334

References

334

Chapter 42. Learning from Queries and Examples with Tree-structured Bias

335

Abstract

335

1 Introduction

335

2 Tree-structured Bias

336

3 The PAC Learning Framework

336

4 The Learning Algorithm

337

5 Experimental Results

340

6 Discussion and Related Work

341

7 Conclusions and Future Work

342

Acknowledgments

342

References

342

Chapter 43. Multi-Agent Reinforcement Learning: Independent vs. Cooperative Agents

343

Abstract

343

1 INTRODUCTION

343

2 RELATED WORK

344

3 REINFORCEMENT LEARNING

344

4 TASK DESCRIPTION

345

5 CASE 1: SHARING SENSATION

345

6 CASE 2: SHARING POLICIES OR EPISODES

346

7 CASE 3: ON JOINT TASKS

348

8 CONCLUSIONS AND FUTURE WORK

349

Acknowledgments

350

References

350

Chapter 44. Better Learners Use Analogical Problem Solving Sparingly

351

Abstract

351

1 WHEN TO ANALOGIZE

351

2 GAP FILLING

352

3 AVOIDING ANALOGY

353

4 USING ANALOGY SPARINGLY

355

5 DISCUSSION

356

Acknowledgements

358

References

358

AUTHOR INDEX

359

SUBJECT INDEX

360