COLT '89 - Proceedings of the Second Annual Workshop, UC Santa Cruz, California, July 31 - August 2, 1989

By: COLT

Elsevier Reference Monographs, 2014

ISBN: 9780080948294, 397 pages

Format: PDF

Copy protection: DRM

Windows PC, Mac OS X, Apple iPad, Android tablet PCs

Price: 24.95 EUR

Contents

Front Cover ..... 1
Proceedings of the Second Annual Workshop on Computational Learning Theory ..... 2
Copyright Page ..... 3
Table of Contents ..... 4
Foreword ..... 6

Part 1: Invited Lecture ..... 8

Chapter 1. Inductive Principles of the Search for Empirical Dependences ..... 10
  §1. Introduction ..... 10
  §2. The problem of expected risk minimization ..... 11
  §3. The principle of empirical risk minimization ..... 13
  §4. The concept of consistency and strong consistency ..... 15
  §5. Strong consistency and uniform convergence ..... 16
  §6. Necessary and sufficient conditions of uniform convergence ..... 17
  §7. The relation to the theory of falsifiability by K. Popper ..... 19
  §8. The capacity of a set of functions ..... 20
  §9. Theorems about the rate of uniform convergence ..... 22
  §10. The principle of structural risk minimization ..... 24
  §11. Concluding remarks ..... 27
  REFERENCES ..... 28

Part 2: Technical Papers ..... 30

Chapter 2. Polynomial Learnability of Semilinear Sets ..... 32
  Abstract ..... 32
  1 Introduction ..... 32
  2 Results and Significance ..... 33
  3 Learnability Models Used ..... 34
  4 Classes of Concepts Considered ..... 36
  5 Technical Details ..... 37
  6 Open Problems ..... 46
  Acknowledgments ..... 46
  References ..... 46

Chapter 3. LEARNING NESTED DIFFERENCES OF INTERSECTION-CLOSED CONCEPT CLASSES ..... 48
  ABSTRACT ..... 48
  1 Introduction ..... 48
  2 The Inclusion-Exclusion Algorithms ..... 53
  3 Conclusions and Open Problems ..... 61
  4 Acknowledgements ..... 62
  References ..... 62

Chapter 4. A Polynomial-time Algorithm for Learning k-variable Pattern Languages from Examples ..... 64
  1 Introduction ..... 64
  2 Definitions and Notation ..... 67
  3 The Algorithm COVER ..... 68
  4 Good Things and Bad Things ..... 69
  5 The Event Tree ..... 71
  6 Putting it All Together ..... 74
  7 Conclusions and Future Research ..... 76
  References ..... 77

Chapter 5. ON LEARNING FROM EXERCISES ..... 79
  ABSTRACT ..... 79
  1. INTRODUCTION ..... 79
  2. LEARNING FROM SOLVED INSTANCES ..... 80
  3. AN APPLICATION ..... 85
  4. LEARNING FROM EXERCISES ..... 86
  5. CONCLUSION ..... 92
  Acknowledgements ..... 93
  References ..... 93
  Appendix A ..... 93

Chapter 6. On Approximate Truth ..... 95
  Abstract ..... 95
  1 Introduction ..... 95
  2 Approximate Truth ..... 96
  3 Some deductive logic of approximate truth ..... 99
  4 Some inductive logic of approximate truth ..... 102
  5 Stable predicates ..... 106
  6 Concluding remarks ..... 107
  7 References ..... 108

Chapter 7. Informed parsimonious inference of prototypical genetic sequences ..... 109
  Abstract ..... 109
  1 Introduction ..... 109
  2 Model of sequence generation ..... 110
  3 Bayes model ..... 112
  4 Expressing the inductive assumptions ..... 113
  5 Computing the optimal theory ..... 117
  6 Experimental results ..... 118
  7 Comparison to the biological parsimony methods ..... 121
  8 Acknowledgements ..... 122
  References ..... 122

Chapter 8. COMPLEXITY ISSUES IN LEARNING BY NEURAL NETS ..... 125
  Abstract ..... 125
  1 INTRODUCTION ..... 125
  2 DEFINITIONS ..... 126
  3 NEURAL NET DESIGN PROBLEMS ..... 128
  4 TRAINING NEURAL NETS ..... 132
  5 CASCADE NEURAL NETS ..... 134
  6 CONCLUSIONS ..... 138
  REFERENCES ..... 139

Chapter 9. Equivalence Queries and Approximate Fingerprints ..... 141
  Abstract ..... 141
  1 Introduction ..... 141
  2 The basic idea ..... 142
  3 Representations of concepts ..... 142
  4 Some examples of approximate fingerprints ..... 145
  5 Comments ..... 150
  6 Acknowledgments ..... 151
  References ..... 151

Chapter 10. LEARNING READ-ONCE FORMULAS USING MEMBERSHIP QUERIES ..... 153
  ABSTRACT ..... 153
  1. INTRODUCTION ..... 153
  2. LEARNING EQUIVALENT READ-ONCE FORMULAS FROM EXPLICITLY GIVEN FORMULAS ..... 155
  3. PRELIMINARIES ..... 156
  4. LEARNING READ-ONCE FORMULAS WITH A RELEVANT POSSIBILITY ORACLE ..... 157
  5. LEARNING MONOTONE READ-ONCE FORMULAS USING MEMBERSHIP QUERIES ..... 158
  6. LEARNING READ-ONCE FORMULAS USING A PROJECTIVE EQUIVALENCE ORACLE ..... 165
  7. EXTENSIONS AND OPEN PROBLEMS ..... 167
  8. SUMMARY ..... 167
  References ..... 168

Chapter 11. LEARNING SIMPLE DETERMINISTIC LANGUAGES ..... 169
  ABSTRACT ..... 169
  INTRODUCTION ..... 169
  PRELIMINARIES ..... 170
  CONTEXT-FREE GRAMMARS AND LANGUAGES ..... 170
  SDG AND SDL ..... 171
  MODELS AND INCORRECTNESS/CORRECTNESS ..... 172
  TYPES OF QUERIES ..... 173
  THE LEARNING ALGORITHM ..... 173
  AN OUTLINE OF THE ALGORITHM ..... 174
  GENERATING NONTERMINALS WITH THEIR MODELS ..... 175
  CORRECTNESS AND COMPLEXITY ..... 177
  CONCLUSION ..... 179
  Acknowledgements ..... 180
  References ..... 180

Chapter 12. Learning in the Presence of Inaccurate Information ..... 182
  ABSTRACT ..... 182
  INTRODUCTION ..... 182
  NOTATION ..... 183
  IDENTIFICATION WITH INACCURATE INPUT ..... 183
  HIERARCHY RESULTS ..... 184
  RELATIVE EFFECTS OF DIFFERENT TYPES OF INACCURACIES ..... 188
  LANGUAGE LEARNING ..... 192
  CONCLUDING REMARKS ..... 194
  Acknowledgements ..... 194
  References ..... 194

Chapter 13. Convergence to Nearly Minimal Size Grammars by Vacillating Learning Machines ..... 196
  ABSTRACT ..... 196
  Preliminaries ..... 197
  Language Learning by Vacillating Machines ..... 199
  Convergence to Nearly Minimal Size Grammars ..... 200
  A Characterization of TxtMfex^a_b Criteria ..... 201
  Comparison of Learning with and without Size Restrictions ..... 203
  References ..... 204

Chapter 14. Inductive Inference with Bounded Number of Mind Changes ..... 207
  ABSTRACT ..... 207
  OVERVIEW ..... 207
  NOTATIONS, DEFINITIONS AND PRIOR RESULTS ..... 208
  INFERENCE WITH BOUNDED NUMBER OF MIND CHANGES ..... 211
  CONCLUSIONS AND OPEN PROBLEMS ..... 219
  ACKNOWLEDGEMENTS ..... 219
  REFERENCES ..... 219

Chapter 15. LEARNING VIA QUERIES TO AN ORACLE ..... 221
  ABSTRACT ..... 221
  1. INTRODUCTION ..... 221
  2. NOTATION AND DEFINITIONS ..... 223
  3. INFERRING REC ..... 227
  4. WHEN ORACLES DO NOT HELP ..... 227
  5. WHEN ORACLES DO HELP ..... 229
  6. BOUNDING MIND CHANGES ..... 231
  7. OPEN QUESTIONS AND THE DEGREE OF INFERABILITY ..... 234
  REFERENCES ..... 235

Chapter 16. LEARNING STRUCTURE FROM DATA: A SURVEY ..... 237
  ABSTRACT ..... 237
  1. INTRODUCTION ..... 237
  2. IDENTIFYING TREES ..... 238
  3. IDENTIFYING POLYTREES ..... 240
  4. IDENTIFYING TREES WITH HIDDEN VARIABLES ..... 242
  5. IDENTIFYING TREE STRUCTURES IN CATEGORICAL RELATIONS ..... 243
  6. IDENTIFIABILITY VS. LEARNABILITY ..... 247
  EXAMPLES ..... 247
  CONCLUSIONS ..... 249
  Acknowledgements ..... 249
  References ..... 250

Chapter 17. A Statistical Approach to Learning and Generalization in Layered Neural Networks ..... 252
  Abstract ..... 252
  1. Introduction ..... 252
  2. Probability inference in the network space ..... 255
  3. Predictions and generalization ..... 258
  4. Asymptotic learning curves in the self-averaging case ..... 262
  Acknowledgment ..... 264
  REFERENCES ..... 265

Chapter 18. THE LIGHT BULB PROBLEM ..... 268
  ABSTRACT ..... 268
  INTRODUCTION ..... 268
  A QUADRATIC-TIME ALGORITHM ..... 269
  ALGORITHM A ..... 270
  ALGORITHM B ..... 271
  BOOTSTRAP TECHNIQUE ..... 272
  k-WAY CORRELATION ..... 273
  k-WAY CORRELATION FOR ARBITRARY k ..... 274
  Acknowledgments ..... 275
  References ..... 275

Chapter 19. From On-line to Batch Learning ..... 276
  Abstract ..... 276
  1 Introduction ..... 276
  2 Description of Models and Terminology ..... 277
  3 Converting from On-line to Batch ..... 279
  4 Chernoff Bounds for Supermartingales ..... 280
  5 The Conversion ..... 285
  6 Comparisons ..... 289
  References ..... 290

Chapter 20. A PARAMETRIZATION SCHEME FOR CLASSIFYING MODELS OF LEARNABILITY ..... 292
  Abstract ..... 292
  1. INTRODUCTION ..... 292
  2. BASIC DEFINITIONS ..... 295
  3. THE UNIFORMITY PARAMETER ..... 296
  4. SOLID LEARNABILITY ..... 298
  5. COVERING RELATIONS BETWEEN CONCEPT CLASSES ..... 303
  Acknowledgements ..... 307
  REFERENCES ..... 308

Chapter 21. On the Role of Search for Learning ..... 310
  Abstract ..... 310
  1 Introduction ..... 310
  2 Preliminaries ..... 312
  3 A More Powerful Enumeration Technique ..... 313
  4 Barzdins' Conjecture ..... 315
  5 Implications for Systems that Learn ..... 316
  6 Acknowledgments ..... 317
  References ..... 317

Chapter 22. ELEMENTARY FORMAL SYSTEM AS A UNIFYING FRAMEWORK FOR LANGUAGE LEARNING ..... 319
  ABSTRACT ..... 319
  1 INTRODUCTION ..... 320
  2 PRELIMINARIES ..... 320
  3 EFS AS A LOGIC PROGRAMMING LANGUAGE ..... 323
  4 THE CLASSES OF EFS LANGUAGES ..... 327
  5 INDUCTIVE INFERENCE OF EFS LANGUAGES ..... 330
  6 CONCLUSION ..... 333
  References ..... 333

Chapter 23. IDENTIFICATION OF UNIONS OF LANGUAGES DRAWN FROM AN IDENTIFIABLE CLASS ..... 335
  ABSTRACT ..... 335
  INTRODUCTION ..... 335
  BASIC CONCEPTS ..... 336
  NEW RESULTS ..... 338

Chapter 24. INDUCTION FROM THE GENERAL TO THE MORE GENERAL ..... 341
  ABSTRACT ..... 341
  1. INTRODUCTION ..... 341
  2. FORMAL PRELIMINARIES ..... 344
  3. RESULTS ..... 347
  4. CONCLUSION ..... 353
  Acknowledgments ..... 354
  References ..... 354

Chapter 25. SPACE-BOUNDED LEARNING AND THE VAPNIK-CHERVONENKIS DIMENSION ..... 356
  ABSTRACT ..... 356
  INTRODUCTION ..... 356
  THE DATA COMPRESSION SCHEME ..... 359
  AN ALGORITHM FOR THE COMPRESSION FUNCTION ..... 361
  SIMPLE PLANAR ARRANGEMENTS ..... 361
  MAXIMUM CLASSES OF VC DIMENSION TWO ..... 364
  APPLICATIONS ..... 366
  SIMPLE ARRANGEMENTS ..... 367
  POSITIVE HALF-SPACES ..... 367
  VECTOR SPACES OF REAL FUNCTIONS ..... 368
  CONCLUSIONS AND OPEN PROBLEMS ..... 370
  Acknowledgements ..... 371
  References ..... 371

Chapter 26. Reliable and Useful Learning ..... 372
  Abstract ..... 372
  1 Introduction ..... 372
  2 Reliable and probably useful learning ..... 374
  3 Basic results ..... 377
  4 Learnability of boolean concepts ..... 379
  5 Learnability of rectangles ..... 380
  6 General conditions for learnability ..... 382
  7 Summary and conclusions ..... 386
  Acknowledgements ..... 386
  References ..... 386

Part 3: Short Abstracts ..... 388

Chapter 27. The Strength of Weak Learnability ..... 390

Chapter 28. ON THE COMPLEXITY OF LEARNING FROM COUNTEREXAMPLES ..... 391
  ABSTRACT ..... 391

Chapter 29. Generalizing the PAC Model: Sample Size Bounds From Metric Dimension-based Uniform Convergence Results ..... 392

Chapter 30. A Theory of Learning Simple Concepts Under Simple Distributions ..... 393
  ABSTRACT ..... 393

Chapter 31. Learning Binary Relations and Total Orders ..... 394

Chapter 32. The Weighted Majority Algorithm ..... 395
  Abstract ..... 395

Author Index ..... 396