go - What's the difference between decodeRuneInternal and decodeRuneInStringInternal?
In Go's standard library (unicode/utf8), `decodeRuneInternal` and `decodeRuneInStringInternal` are identical except for their argument types:

func decodeRuneInternal(p []byte) (r rune, size int, short bool)
func decodeRuneInStringInternal(s string) (r rune, size int, short bool)
Why not define `decodeRuneInStringInternal` simply as a wrapper:

func decodeRuneInStringInternal(s string) (r rune, size int, short bool) {
    return decodeRuneInternal([]byte(s))
}
In utf8.go, the body of `decodeRuneInStringInternal` is a line-for-line copy of `decodeRuneInternal`. Why duplicate the code?
Keeping two separate implementations avoids a memory allocation. If the string function wrapped the []byte function, every call would pay for the copy made by the `[]byte(s)` conversion; conversely, if the []byte function wrapped the string function, the `string(p)` conversion would allocate. Duplicating the decode loop lets both entry points run allocation-free.