
--- Log opened Sat May 22 00:00:58 2010
00:07 -!- skelterjohn [~jasmuth@c-76-99-92-14.hsd1.nj.comcast.net] has quit [Quit:
skelterjohn]
00:15 -!- homiziado_ [~ernestofr@62.169.125.54.rev.optimus.pt] has quit [Quit:
homiziado_]
00:16 -!- homiziado [~ernestofr@62.169.125.54.rev.optimus.pt] has joined #go-nuts
00:16 -!- homiziado [~ernestofr@62.169.125.54.rev.optimus.pt] has left #go-nuts []
00:16 -!- marsu [~marsu@226.65.202-77.rev.gaoland.net] has joined #go-nuts
00:28 -!- tedster [~tedster@cpe-067-023-152-198.dhcp.wadsnet.com] has joined
#go-nuts
00:30 -!- carllerche [~carllerch@enginey-9.border1.sfo002.pnap.net] has quit
[Quit: carllerche]
00:32 -!- glen__ [~4624e39f@gateway/web/freenode/x-zovzvfuzhnpsxogt] has joined
#go-nuts
00:34 -!- marsu [~marsu@226.65.202-77.rev.gaoland.net] has quit [Quit: Leaving]
00:42 -!- glen__ [~4624e39f@gateway/web/freenode/x-zovzvfuzhnpsxogt] has quit
[Quit: Page closed]
00:43 < plexdev> http://is.gd/cjJfW by [Christopher Wedgwood] in
go/src/pkg/net/ -- net: implement raw sockets
00:43 < plexdev> http://is.gd/cjJg1 by [Devon H. O'Dell] in go/src/cmd/cgo/
-- cgo: better error for no C symbols
00:46 -!- shardz [samuel@ilo.staticfree.info] has joined #go-nuts
00:51 -!- Xera^ [~brit@87-194-208-246.bethere.co.uk] has quit [Read error:
Connection reset by peer]
00:51 -!- bmizerany [~bmizerany@dsl081-064-072.sfo1.dsl.speakeasy.net] has quit
[Remote host closed the connection]
00:52 -!- meatmanek [~meatmanek@c-76-21-205-249.hsd1.va.comcast.net] has quit
[Quit: Leaving]
00:54 -!- meatmanek [~meatmanek@c-76-21-205-249.hsd1.va.comcast.net] has joined
#go-nuts
00:55 -!- meatmanek [~meatmanek@c-76-21-205-249.hsd1.va.comcast.net] has left
#go-nuts []
01:06 -!- BrowserUk [~irc1_20_B@92.15.81.219] has joined #go-nuts
01:07 < BrowserUk> Hi, anyone around know anything about the concurrency
internals of go?
01:08 -!- BrowserUk [~irc1_20_B@92.15.81.219] has left #go-nuts []
01:12 -!- skelterjohn [~jasmuth@c-76-99-92-14.hsd1.nj.comcast.net] has joined
#go-nuts
01:15 -!- ikke [~ikke@201-66-201-169.smace700.dsl.brasiltelecom.net.br] has joined
#go-nuts
01:15 -!- ikke [~ikke@201-66-201-169.smace700.dsl.brasiltelecom.net.br] has quit
[Changing host]
01:15 -!- ikke [~ikke@unaffiliated/ikkebr] has joined #go-nuts
01:16 -!- yashi [~yashi@dns1.atmark-techno.com] has quit [Ping timeout: 248
seconds]
01:17 -!- TR2N [email@89.180.237.1] has quit [Ping timeout: 240 seconds]
01:18 -!- ikkebr [~ikke@unaffiliated/ikkebr] has quit [Ping timeout: 258 seconds]
01:19 -!- g0bl1n [~anonymous@a213-22-77-195.cpe.netcabo.pt] has quit [Ping
timeout: 248 seconds]
01:20 -!- skelterjohn [~jasmuth@c-76-99-92-14.hsd1.nj.comcast.net] has quit [Quit:
skelterjohn]
01:21 -!- TR2N [email@89.180.237.1] has joined #go-nuts
01:24 -!- TR2N` [email@89-180-237-1.net.novis.pt] has joined #go-nuts
01:24 -!- TR2N [email@89.180.237.1] has quit [Disconnected by services]
01:30 -!- Rondell [~JoLeClodo@vian.wallinfire.net] has quit [Ping timeout: 265
seconds]
01:30 -!- Rondell [~JoLeClodo@vian.wallinfire.net] has joined #go-nuts
01:30 < plexdev> http://is.gd/cjMgi by [Russ Cox] in go/src/cmd/cgo/ -- roll
back 1193046 - fix build
01:31 -!- alexbobp [~alex@66.112.249.162] has quit [Remote host closed the
connection]
01:39 -!- vsayer [~vivek@c-76-103-244-154.hsd1.ca.comcast.net] has quit [Read
error: Connection reset by peer]
01:40 -!- vsayer [~vivek@c-76-103-244-154.hsd1.ca.comcast.net] has joined #go-nuts
01:40 -!- Eridius [~kevin@unaffiliated/eridius] has quit [Ping timeout: 252
seconds]
01:43 -!- gisikw [~gisikw@137.28.246.34] has joined #go-nuts
01:49 -!- alehorst [~alehorst@189.26.56.41.dynamic.adsl.gvt.net.br] has quit [Ping
timeout: 248 seconds]
01:50 -!- alehorst [~alehorst@189.26.56.41.dynamic.adsl.gvt.net.br] has joined
#go-nuts
01:52 -!- g0bl1n [~anonymous@a213-22-77-195.cpe.netcabo.pt] has joined #go-nuts
01:56 -!- vsayer [~vivek@c-76-103-244-154.hsd1.ca.comcast.net] has quit [Read
error: Connection timed out]
02:04 -!- MizardX [~MizardX@unaffiliated/mizardx] has quit [Read error: Connection
reset by peer]
02:06 -!- MizardX [~MizardX@unaffiliated/mizardx] has joined #go-nuts
02:12 -!- gisikw [~gisikw@137.28.246.34] has quit [Remote host closed the
connection]
02:19 -!- rhelmer [~rhelmer@adsl-69-107-74-234.dsl.pltn13.pacbell.net] has joined
#go-nuts
02:35 -!- Venom_X [~pjacobs@71.20.102.220] has joined #go-nuts
02:53 -!- Leon9 [~leon@189.224.119.97] has joined #go-nuts
02:57 < exch> has part of the API been reverted in the past week?  The API of
the compress/gzip package was changed about a week ago, but now it's back to the
original
03:10 < MizardX> Last change May 7
03:10 -!- Ginto8 [~Ginto8@pool-72-82-235-34.cmdnnj.fios.verizon.net] has quit
[Ping timeout: 245 seconds]
03:11 < exch> mm weird
03:12 -!- yashi [~yashi@dns1.atmark-techno.com] has joined #go-nuts
03:13 < MizardX> There's a difference between the latest release version and
hg though.  NewDeflater -> NewWriter etc..
03:13 < exch> yea, but both times I got the source the same way.  (hg pull
-u)
03:14 < exch> the pull i did a few hours ago reverted the api back to the
old one (gzip.Deflater etc)
03:21 < exch> mm another fresh pull solved it.
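
For reference, a minimal sketch of the renamed API as it later settled, with
gzip.NewWriter in place of the old NewDeflater; 2010-era signatures differed
(NewWriter also returned an error then), so treat this as the modern form:

    package main

    import (
        "bytes"
        "compress/gzip"
        "fmt"
    )

    func main() {
        var buf bytes.Buffer
        zw := gzip.NewWriter(&buf) // formerly gzip.NewDeflater
        if _, err := zw.Write([]byte("hello, go-nuts")); err != nil {
            panic(err)
        }
        if err := zw.Close(); err != nil { // Close flushes the gzip trailer
            panic(err)
        }
        fmt.Println(buf.Len(), "compressed bytes")
    }
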
03:27 -!- Venom_X [~pjacobs@71.20.102.220] has quit [Quit: Venom_X]
03:34 < plexdev> http://is.gd/cjTHX by [Robert Griesemer] in 2 subdirs of
go/ -- test/hilbert.go: convert to test case and benchmark for big.Rat
03:34 < plexdev> http://is.gd/cjTI0 by [Robert Griesemer] in 7 subdirs of
go/src/pkg/ -- go/printer, gofmt: fix printing of labels,
03:45 -!- slashus2 [~slashus2@74-137-24-74.dhcp.insightbb.com] has quit [Quit:
slashus2]
03:47 < wrtp> is there a way to tell all.bash to re-make all your
goinstalled packages too?
03:49 -!- scarabx [~scarabx@c-76-19-43-200.hsd1.ma.comcast.net] has joined
#go-nuts
03:51 -!- tux21b [~christoph@90.146.60.30] has quit [Ping timeout: 240 seconds]
03:55 -!- tumdum [~tumdum@unaffiliated/tumdum] has joined #go-nuts
03:58 -!- tumdum_ [~tumdum@atx177.neoplus.adsl.tpnet.pl] has joined #go-nuts
03:58 -!- tumdum_ [~tumdum@atx177.neoplus.adsl.tpnet.pl] has quit [Changing host]
03:58 -!- tumdum_ [~tumdum@unaffiliated/tumdum] has joined #go-nuts
03:58 -!- vsayer [~vivek@c-76-103-244-154.hsd1.ca.comcast.net] has joined #go-nuts
04:00 -!- tumdum [~tumdum@unaffiliated/tumdum] has quit [Ping timeout: 264
seconds]
04:01 -!- vsayer [~vivek@c-76-103-244-154.hsd1.ca.comcast.net] has quit [Read
error: Connection reset by peer]
04:02 -!- mfoemmel [~mfoemmel@chml01.drwholdings.com] has quit [Ping timeout: 248
seconds]
04:02 -!- vsayer [~vivek@c-76-103-244-154.hsd1.ca.comcast.net] has joined #go-nuts
04:02 -!- meatmanek [~meatmanek@c-76-21-205-249.hsd1.va.comcast.net] has joined
#go-nuts
04:16 -!- mfoemmel [~mfoemmel@chml01.drwholdings.com] has joined #go-nuts
04:17 -!- vsayer [~vivek@c-76-103-244-154.hsd1.ca.comcast.net] has quit [Ping
timeout: 240 seconds]
04:18 -!- vsayer [~vivek@c-76-103-244-154.hsd1.ca.comcast.net] has joined #go-nuts
04:20 -!- tumdum [~tumdum@unaffiliated/tumdum] has quit [Quit: tumdum]
04:27 -!- vsayer [~vivek@c-76-103-244-154.hsd1.ca.comcast.net] has quit [Ping
timeout: 260 seconds]
04:31 -!- vsayer [~vivek@c-76-103-244-154.hsd1.ca.comcast.net] has joined #go-nuts
04:41 -!- vsayer [~vivek@c-76-103-244-154.hsd1.ca.comcast.net] has quit [Read
error: Connection reset by peer]
04:42 -!- vsayer [~vivek@c-76-103-244-154.hsd1.ca.comcast.net] has joined #go-nuts
04:46 -!- Ideal [~Ideal@ideal-1-pt.tunnel.tserv6.fra1.ipv6.he.net] has joined
#go-nuts
04:52 -!- vsayer [~vivek@c-76-103-244-154.hsd1.ca.comcast.net] has quit [Read
error: Connection reset by peer]
04:53 -!- vsayer [~vivek@c-76-103-244-154.hsd1.ca.comcast.net] has joined #go-nuts
04:59 -!- scm [justme@d134153.adsl.hansenet.de] has quit [Read error: Operation
timed out]
05:01 -!- illya77 [~illya77@206-214-133-95.pool.ukrtel.net] has joined #go-nuts
05:03 -!- scm [justme@80.171.71.228] has joined #go-nuts
05:15 -!- path[l] [UPP@120.138.102.34] has quit [Quit: path[l]]
05:26 -!- eikenberry [~jae@mail.zhar.net] has quit [Ping timeout: 265 seconds]
05:26 -!- path[l] [UPP@120.138.102.34] has joined #go-nuts
05:26 -!- path[l] [UPP@120.138.102.34] has quit [Client Quit]
05:29 -!- path[l] [UPP@120.138.102.34] has joined #go-nuts
05:33 -!- ShadowIce [pyoro@unaffiliated/shadowice-x841044] has joined #go-nuts
05:42 -!- ikke [~ikke@unaffiliated/ikkebr] has quit []
05:43 -!- vsayer [~vivek@c-76-103-244-154.hsd1.ca.comcast.net] has quit [Quit:
Leaving]
05:51 -!- alexbobp [~alex@adsl-75-34-101-206.dsl.austtx.sbcglobal.net] has joined
#go-nuts
05:59 -!- adu [~ajr@pool-173-66-252-179.washdc.fios.verizon.net] has joined
#go-nuts
06:23 -!- Agon-laptop
[~marcel@HSI-KBW-095-208-003-128.hsi5.kabel-badenwuerttemberg.de] has joined
#go-nuts
06:24 -!- Chryson [~Chryson@c-71-61-11-114.hsd1.pa.comcast.net] has quit [Quit:
Leaving]
06:26 -!- ako [~nya@f051194071.adsl.alicedsl.de] has joined #go-nuts
06:27 -!- aho [~nya@f051155084.adsl.alicedsl.de] has quit [Ping timeout: 265
seconds]
06:33 -!- scarabx [~scarabx@c-76-19-43-200.hsd1.ma.comcast.net] has quit [Quit:
This computer has gone to sleep]
06:33 -!- slashus2 [~slashus2@74-137-24-74.dhcp.insightbb.com] has joined #go-nuts
06:54 -!- Ideal [~Ideal@ideal-1-pt.tunnel.tserv6.fra1.ipv6.he.net] has quit [Quit:
Ideal]
07:04 -!- slashus2 [~slashus2@74-137-24-74.dhcp.insightbb.com] has quit [Quit:
slashus2]
07:33 -!- MizardX [~MizardX@unaffiliated/mizardx] has quit [Ping timeout: 276
seconds]
07:34 -!- vrtical [rm445@pip.srcf.societies.cam.ac.uk] has quit [Ping timeout: 260
seconds]
07:34 -!- vrtical [rm445@pip.srcf.societies.cam.ac.uk] has joined #go-nuts
07:42 -!- rlab [~Miranda@91.200.158.34] has joined #go-nuts
07:49 -!- Agon-laptop
[~marcel@HSI-KBW-095-208-003-128.hsi5.kabel-badenwuerttemberg.de] has quit [Remote
host closed the connection]
07:53 -!- xenplex [~xenplex@195.46.241.226] has joined #go-nuts
08:03 -!- zeitgeist [~chatzilla@74.86.0.138] has quit [Ping timeout: 245 seconds]
08:09 -!- rhelmer [~rhelmer@adsl-69-107-74-234.dsl.pltn13.pacbell.net] has quit
[Ping timeout: 252 seconds]
08:13 -!- fhs [~fhs@pool-71-167-84-226.nycmny.east.verizon.net] has joined
#go-nuts
08:14 -!- rhelmer [~rhelmer@adsl-69-107-89-5.dsl.pltn13.pacbell.net] has joined
#go-nuts
08:21 < taruti> Is it possible to change the port where http is listening
while running a program?
08:24 < taruti> hmm, http.Serve + closing the net.Listener might work
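
A rough sketch of taruti's idea using modern net/http names (the 2010 API
differed in details): http.Serve runs on an explicit net.Listener, closing
that listener stops it, and you can then listen again on a new port.  The
ports and handler here are arbitrary:

    package main

    import (
        "net"
        "net/http"
    )

    // serveOn listens on addr and serves h until the returned listener is closed.
    func serveOn(addr string, h http.Handler) (net.Listener, error) {
        ln, err := net.Listen("tcp", addr)
        if err != nil {
            return nil, err
        }
        go http.Serve(ln, h) // http.Serve returns once ln is closed
        return ln, nil
    }

    func main() {
        mux := http.NewServeMux()
        ln, err := serveOn(":8080", mux)
        if err != nil {
            panic(err)
        }
        // "Changing the port" is then: close the old listener, listen anew.
        ln.Close()
        if _, err := serveOn(":9090", mux); err != nil {
            panic(err)
        }
        select {} // keep the program alive
    }
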
08:30 -!- TR2N [email@89-180-237-1.net.novis.pt] has left #go-nuts []
08:35 -!- noam__ [~noam@77.127.205.252] has quit [Read error: Connection reset by
peer]
08:35 -!- noam__ [~noam@77.127.205.252] has joined #go-nuts
08:41 -!- alexbobp [~alex@adsl-75-34-101-206.dsl.austtx.sbcglobal.net] has quit
[Ping timeout: 260 seconds]
08:43 -!- alexbobp [~alex@adsl-75-34-101-206.dsl.austtx.sbcglobal.net] has joined
#go-nuts
08:48 < madari> hmm what does this mean: looking at the process I see that
its resident size is 650M, but when looking at the heap profile (alloc_space) it
says Total MB: 12.5
08:49 < madari> and is it normal that the GC is practically doing nothing
until you are almost OOM
08:52 -!- napsy [~luka@88.200.96.14] has joined #go-nuts
08:52 -!- fuzzybyt1 [~fuzzybyte@a47.org] has left #go-nuts []
08:52 -!- fuzzybyte [~fuzzybyte@a47.org] has joined #go-nuts
09:01 < jessta> madari: the GC doesn't release memory back to the OS
09:02 < madari> uh ok
09:04 < madari> but how come it lets the program eat practically all the
available memory and only then starts recycling..  or at least that's what I
think is happening?
09:04 < jessta> the GC isn't great at the moment, it's being rewritten
09:04 < jessta> but it's not a priority because it can be fixed later
09:05 < madari> ok
09:05 < madari> thank you
09:07 < jessta> madari: hmmm...but the program should still be able to reuse
memory that has been GC'd
09:08 -!- zeitgeist [~chatzilla@222.73.189.44] has joined #go-nuts
09:08 < jessta> so maybe you have a leak
09:09 < madari> I'm sure I do :D
09:10 < madari> but still...  it's kind of a weird thing (for me) that the
recycling only starts working efficiently when I'm running OOM.  So efficiently
that I never end up swapping
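
A small way to observe what jessta is describing: the runtime's own counters
separate the live heap from memory obtained from the OS.  Field names are from
the modern runtime package (the 2010 interface was different):

    package main

    import (
        "fmt"
        "runtime"
    )

    func main() {
        var m runtime.MemStats
        runtime.ReadMemStats(&m)
        // HeapAlloc is the live heap; Sys is what the runtime has obtained
        // from the OS and, as discussed above, largely does not hand back.
        fmt.Printf("HeapAlloc = %d bytes, Sys = %d bytes\n", m.HeapAlloc, m.Sys)
    }
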
09:14 -!- Agon-laptop
[~marcel@HSI-KBW-095-208-003-128.hsi5.kabel-badenwuerttemberg.de] has joined
#go-nuts
09:16 -!- holmescn [~2052BBD8C@221.192.238.9] has joined #go-nuts
09:17 -!- holmescn [~2052BBD8C@221.192.238.9] has quit [Remote host closed the
connection]
09:23 -!- kel__ [~kel@cpc2-leat2-0-0-cust98.hers.cable.ntl.com] has joined
#go-nuts
09:25 -!- marsu [~marsu@16.109.202-77.rev.gaoland.net] has joined #go-nuts
09:25 -!- Ideal [~Ideal@ideal-1-pt.tunnel.tserv6.fra1.ipv6.he.net] has joined
#go-nuts
10:21 -!- Shyde [~shyde@HSI-KBW-078-043-070-132.hsi4.kabel-badenwuerttemberg.de]
has joined #go-nuts
10:43 -!- Ideal [~Ideal@ideal-1-pt.tunnel.tserv6.fra1.ipv6.he.net] has quit [Quit:
Ideal]
11:11 -!- hcatlin [~hcatlin@pdpc/supporter/professional/hcatlin] has joined
#go-nuts
11:21 -!- mitsuhiko [~mitsuhiko@ubuntu/member/mitsuhiko] has quit [Excess Flood]
11:23 -!- marchdown [~marchdown@178.178.119.100] has joined #go-nuts
11:23 -!- marchdown [~marchdown@178.178.119.100] has quit [Excess Flood]
11:24 -!- mitsuhiko [~mitsuhiko@ubuntu/member/mitsuhiko] has joined #go-nuts
11:25 -!- marchdown [~marchdown@178.178.119.100] has joined #go-nuts
11:25 -!- marchdown [~marchdown@178.178.119.100] has quit [Excess Flood]
11:26 -!- General1337 [~support@71-84-50-230.dhcp.mtpk.ca.charter.com] has joined
#go-nuts
11:26 -!- marchdown [~marchdown@178.178.119.100] has joined #go-nuts
11:27 -!- mertimor [~mertimor@p4FE752C4.dip.t-dialin.net] has joined #go-nuts
11:29 -!- General13372 [~support@71-84-50-230.dhcp.mtpk.ca.charter.com] has quit
[Ping timeout: 240 seconds]
11:40 -!- marchdown [~marchdown@178.178.119.100] has quit [Quit: marchdown]
11:48 -!- Xera^ [~brit@87-194-208-246.bethere.co.uk] has joined #go-nuts
11:57 -!- illya77 [~illya77@206-214-133-95.pool.ukrtel.net] has quit [Read error:
Connection reset by peer]
12:09 -!- Agon-laptop
[~marcel@HSI-KBW-095-208-003-128.hsi5.kabel-badenwuerttemberg.de] has quit [Remote
host closed the connection]
12:22 -!- sladegen [~nemo@unaffiliated/sladegen] has quit [Disconnected by
services]
12:22 -!- sladegen [~nemo@unaffiliated/sladegen] has joined #go-nuts
12:22 < emiel_> just trying to understand some basics here :), but why does
Go not allow a one-line block without the {}, allowing for instance one-line
if statements?
12:29 < jessta> emiel_: because that would require a special case in the
parser, and there is nothing gained by it
12:30 -!- Shyde [~shyde@HSI-KBW-078-043-070-132.hsi4.kabel-badenwuerttemberg.de]
has quit [Quit: Shyde]
12:30 < emiel_> hm, okay :) i am probably just spoiled by all the other
languages ;)
12:30 < emiel_> the {} look really weird for one-liners
12:31 < taruti> not really if one is used to that
12:32 < emiel_> yea, it's probably just what-one's-used-to
12:32 < emiel_> will adapt :)!
12:34 < taruti> also "if a:=1; b c" would look quite weird
12:34 < taruti> as compared to if a:=1; b { c }
12:35 -!- hcatlin [~hcatlin@pdpc/supporter/professional/hcatlin] has quit [Quit:
hcatlin]
12:42 -!- Svarthandske [~nn@dsl-tkubrasgw1-fe3cdc00-28.dhcp.inet.fi] has joined
#go-nuts
12:42 -!- BrowserUk [~irc1_20_B@92.15.81.219] has joined #go-nuts
12:48 -!- ikaros [~ikaros@f051118043.adsl.alicedsl.de] has joined #go-nuts
12:53 -!- adu [~ajr@pool-173-66-252-179.washdc.fios.verizon.net] has quit [Quit:
adu]
12:57 -!- Project_2501 [~Marvin@82.84.74.54] has joined #go-nuts
12:57 * Project_2501 appears o.o
13:15 -!- rlab_ [~Miranda@91.200.158.34] has joined #go-nuts
13:16 -!- rlab [~Miranda@91.200.158.34] has quit [Ping timeout: 240 seconds]
13:21 -!- Gracenotes [~person@wikipedia/Gracenotes] has quit [Ping timeout: 245
seconds]
13:27 -!- skelterjohn [~jasmuth@c-76-99-92-14.hsd1.nj.comcast.net] has joined
#go-nuts
13:46 -!- SRabbelier [~SRabbelie@ip138-114-211-87.adsl2.static.versatel.nl] has
quit [Read error: Connection reset by peer]
13:48 < emiel_> how efficient is simply concatenating strings?  there seems
to be no string buffer of any sort?
13:50 < taruti> emiel_: there is bytes.Buffer
13:52 < smw_> emiel_: concatenation means allocating enough space for both
strings and copying them
13:52 < smw_> that is pretty inefficient if it is something you will be
doing a lot
13:52 -!- SRabbelier [~SRabbelie@ip138-114-211-87.adsl2.static.versatel.nl] has
joined #go-nuts
13:53 -!- ShadowIce` [pyoro@unaffiliated/shadowice-x841044] has joined #go-nuts
13:54 -!- skelterjohn [~jasmuth@c-76-99-92-14.hsd1.nj.comcast.net] has quit [Quit:
skelterjohn]
13:55 -!- ShadowIce [pyoro@unaffiliated/shadowice-x841044] has quit [Ping timeout:
240 seconds]
13:56 < taruti> another possibility is StringVector
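
A minimal sketch of the two approaches: each + on strings allocates a new
string and copies both halves, while bytes.Buffer appends into a growing buffer
and converts to a string once at the end.  (StringVector lived in the old
container/vector package, since removed.)

    package main

    import (
        "bytes"
        "fmt"
    )

    func main() {
        // Naive concatenation: every += allocates and copies.
        s := ""
        for i := 0; i < 1000; i++ {
            s += "x"
        }

        // bytes.Buffer: amortised growth, one final copy via String().
        var buf bytes.Buffer
        for i := 0; i < 1000; i++ {
            buf.WriteString("x")
        }
        fmt.Println(len(s), len(buf.String()))
    }
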
13:57 -!- marchdown [~marchdown@91.79.37.140] has joined #go-nuts
14:00 < BrowserUk> Has anyone seen any documentation/talks/conceptual
overview of go's concurrency stuff?
14:00 < jessta> I think pretty much all the talks mention it
14:01 < jessta> BrowserUk: what kind of information are you looking for?
14:01 < Namegduf> Is the infrastructure in place to actually restart
programs using exec() to have them reload themselves?
14:02 < Namegduf> Because otherwise, you can't use "restart to reload
things" as a solution for anything which doesn't want to dump all its network
connections, unless there's other ways to do that.
14:02 < BrowserUk> I mean the implementation behind it.  It seems they use a
mixture of coroutines and kernel threads, but I don't see much about how they
decide whether a particular goroutine runs as a coroutine or a thread.
14:02 < Namegduf> BrowserUk: No
14:03 < Namegduf> BrowserUk: They don't "use coroutines"
14:03 < Namegduf> And goroutines aren't ever "a thread"
14:03 < Namegduf> It uses multiple threads, and schedules goroutines across
them.
14:03 < Namegduf> Each goroutine has a separate stack.
14:03 < Namegduf> It uses segmented stacks to avoid them being overly long.
14:05 < BrowserUk> So each thread runs a custom goroutine scheduler?
14:05 < Namegduf> There's a scheduler, yes.
14:05 < BrowserUk> *a* scheduler or one per kernel thread?
14:05 < Namegduf> I don't know.
14:05 < Namegduf> Doesn't seem important, though.
14:05 -!- Soultaker [~maks@hell.student.utwente.nl] has left #go-nuts []
14:05 -!- Soultaker [~maks@hell.student.utwente.nl] has joined #go-nuts
14:06 < Soultaker> is there a shorter way to write a function like
min/max/abs than the obvious insanely verbose version with if/then/else?
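
For concreteness, the "insanely verbose" shape Soultaker means; Go of this era
has no ternary operator and no generics, so each case is spelled out with if:

    package main

    import "fmt"

    // No ?: operator, so min is four lines per numeric type.
    func min(a, b int) int {
        if a < b {
            return a
        }
        return b
    }

    func abs(x int) int {
        if x < 0 {
            return -x
        }
        return x
    }

    func main() {
        fmt.Println(min(3, 7), abs(-4))
    }
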
14:06 < BrowserUk> I'm trying to understand how closures work across
kthreads.
14:06 < Namegduf> Why wouldn't they?
14:07 < Namegduf> Closures are only different from functions in that they
retain access to local variables of the containing scope
14:07 < Namegduf> And threads run in the same memory space
14:07 < Soultaker> maybe it helps to consider that variables accessed by
closures are allocated on the heap
14:07 < Soultaker> (usually local variables are allocated on the stack, but
stacks are thread-local)
14:07 < BrowserUk> Yes, but the closed-over locals are on the originating
thread's stack?
14:07 < Soultaker> (or go-routine-local in Go)
14:07 < Namegduf> No.
14:07 -!- skelterjohn [~jasmuth@c-76-99-92-14.hsd1.nj.comcast.net] has joined
#go-nuts
14:07 < Namegduf> Closed-over variables are on the heap.
14:08 < Soultaker> BrowserUk: no, that was my point.  these are all on the
heap.
14:08 < Soultaker> except Go hides that from you :)
14:08 < Namegduf> Well
14:08 < BrowserUk> Okay...so locals are allocated on the stack, but lifted
to the heap if they are later closed over?
14:08 < Soultaker> (it's probably best to consider ALL variables to exist on
the heap; the fact that a data stack exists at all is just an implementation
detail really)
14:08 < Namegduf> I don't think Go considers whether the implementation puts
things on the heap or stack as "important".
14:09 < Namegduf> BrowserUk: No.
14:09 < Namegduf> BrowserUk: They're allocated on the heap if they're closed
over.
14:09 < Namegduf> They're allocated on the stack otherwise.
14:09 < Soultaker> well, the key is that if you return from a function, the
variables from its activation are not gone if they are still being referenced
(e.g.  through a closure)
14:09 < Namegduf> The compiler knows whether variables are closed over at
compile time.
14:09 < Namegduf> I don't think Go considers whether the implementation puts
things on the heap or stack as "important" to be simple
14:10 < Namegduf> It actually isn't that complex, though.
14:10 < Namegduf> If you get an address of it, or close over it, it's on the
heap, otherwise not.
14:10 < Namegduf> I think.
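
A minimal illustration of the point: the closed-over variable outlives the call
that created it and is visible from another goroutine, which works because it
lives on the heap (or behaves as if it does):

    package main

    import "fmt"

    // counter returns a closure; n must survive counter's return,
    // so the compiler moves it to the heap.
    func counter() func() int {
        n := 0
        return func() int {
            n++
            return n
        }
    }

    func main() {
        next := counter()
        done := make(chan bool)
        go func() {
            // The goroutine sees the same n, whichever OS thread it
            // runs on: one shared memory space.
            fmt.Println(next()) // 1
            done <- true
        }()
        <-done
        fmt.Println(next()) // 2
    }
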
14:12 < jessta> BrowserUk: goroutines are multiplexed on to real threads, so
a thread may have multiple goroutines allocated to it by the scheduler, the
scheduler may move goroutines between threads too
14:14 < Namegduf> Hm.
14:14 < BrowserUk> Hm. Interesting...is "the scheduler" a part of all
threads?  (As opposed to a separate schedular per thread.)
14:14 < Namegduf> You could do horrible things and stub out main.main and
have all real functionality happen from goroutines spawned from init functions.
14:14 < Namegduf> That'd be amusing.
14:16 * BrowserUk trying to see how goroutines would move between threads.  Ie.
Why would the move?  What would move them?  And why?
14:16 -!- skelterjohn [~jasmuth@c-76-99-92-14.hsd1.nj.comcast.net] has quit [Quit:
skelterjohn]
14:17 < jessta> if a thread is blocked by a syscall, it may be useful to
move some of the goroutines from that thread to another thread so they can run
14:18 < Namegduf> BrowserUk: There's a single memory space.  What's the
difference between a "single scheduler" and a scheduler per thread?
14:18 < Namegduf> They can all call into the same scheduling stuff.
14:19 < BrowserUk> That's a good why.  But how?  The thread with the
blocking call won't reschedule (by the kernel) until it unblocks.  So the
list/queue/whatever of the runnable goroutines must exist outside of the thread.
14:20 < Namegduf> Things don't "exist within" any individual thread
14:20 < Namegduf> They share the same memory
14:20 < Namegduf> This is the distinction between threads and processes
14:20 < BrowserUk> For a goroutine to get moved between threads, the scheduler
would have to take control when a new thread gets a slice, or receive control
during the running of that slice.
14:21 < BrowserUk> Namegduf: Obviously.
14:23 < jessta> BrowserUk: making a syscall could yield to the scheduler
before the call is made to the OS
14:23 < BrowserUk> What I'm trying to get at is that threads resume from
where they left off.  When does the Go scheduler get a look in?
14:24 -!- ikaros [~ikaros@f051118043.adsl.alicedsl.de] has quit [Remote host
closed the connection]
14:25 < jessta> the Go scheduler gets a look in when a goroutine yields
14:25 < BrowserUk> jessta: "yield"?  Doesn't that imply "coroutines"?
14:25 < Namegduf> No
14:25 < jessta> yes, pretty much
14:25 < Namegduf> Well
14:26 < Namegduf> Nothing beyond what a goroutine is
14:26 < jessta> it's co-operative multi-tasking
14:26 < Namegduf> Goroutines function like coroutines in that they yield,
yes
14:26 < BrowserUk> That's where I came in.
14:26 < jessta> goroutines are coroutines multiplexed over real threads
14:26 < Soultaker> I guess it depends on your definition of what a coroutine
is.
14:27 -!- eikenberry [~jae@mail.zhar.net] has joined #go-nuts
14:27 < Soultaker> in my opinion threads with a channel are not coroutines
14:27 < Soultaker> but if you have them (like in Go) you don't need
coroutines anymore
14:27 -!- scarabx [~scarabx@c-76-19-43-200.hsd1.ma.comcast.net] has joined
#go-nuts
14:27 < Soultaker> coroutines, to me, suggest a language construct that
allows for passing control back and forth between two routines.
14:28 < jessta> Soultaker: eg.  goto
14:28 < Soultaker> in Go, that doesn't happen explicitly, but only
implicitly (if you have two goroutines and an unbuffered channel where one writes
and the other reads, you have pretty much the same thing)
14:28 < BrowserUk> BrowserUk: "It seems they use a mixture of coroutines and
kernel threads" -- Namegduf: "No", "They don't 'use coroutines'"...
14:28 < Soultaker> not at all like goto ;)
14:28 < emiel_> sorry to interrupt, guys, but i saw your discussion on
heap/stack allocation, and must say i do not really get it, where, for instance,
is "return &Buffer{buf: buf}" (bytes.NewBuffer) allocated?
14:28 < Namegduf> BrowserUk: Right.  They don't "use [a] mixture"
14:29 < Soultaker> emiel_: on the heap.
14:29 -!- noam__ [~noam@77.127.205.252] has quit [Read error: Connection reset by
peer]
14:29 < Soultaker> in Java, it would read "return new Buffer(buf)"
14:29 < emiel_> Soultaker: based on what?  on the return statement?
14:29 < Namegduf> They have at least some attributes of one, and could be
considered to BE, not to USE, that one, and they use the other by being scheduled
across it.
14:29 < jessta> Soultaker: ok, more like setjmp
14:29 -!- noam__ [~noam@77.127.205.252] has joined #go-nuts
14:29 < Soultaker> on the fact that you return a pointer to an object.
14:29 < jessta> Soultaker: but only because goto was nerfed
14:29 < BrowserUk> Sorry Namegduf: But if you have "coroutines" running
inside "kernel threads", how is that not a mixture?
14:30 -!- mizipzor [~mizipzor@c213-89-173-222.bredband.comhem.se] has joined
#go-nuts
14:30 < Namegduf> Well, partly because being something and using something
are distinct.
14:30 < Soultaker> jessta: I think setjmp is a lot less powerful
14:31 < Soultaker> but I guess you could simulate a coroutine with setjmp
and a lot of discipline.
14:31 < jessta> Soultaker: I guess, goroutines do have their own stacks
14:31 < BrowserUk> I'm guessing that it would be based around
set/getcontext?
14:31 < jessta> coroutines tend not to
14:31 < Namegduf> Secondly, a mixture generally implies some combination of
them, rather than just both existing
14:31 < Namegduf> i.e.  mixing
14:31 < emiel_> Soultaker: right, so when it would have read a := &Buffer{};
return &Buffer{someRef:a}, then stack-allocated a would go out of scope?
14:31 < Soultaker> jessta: yes, for one thing.  with longjmp, if one of the
calling functions returns, all the jump buffers in functions above are
invalidated.
14:32 < BrowserUk> Hm. I think you're dissembling :)
14:32 < Soultaker> emiel_: I'm not sure what you mean.  In that example, a
is a pointer value on the stack, but it points to a Buffer struct in the heap.
14:32 < BrowserUk> phone &
14:32 < Namegduf> That seems unlikely
14:32 < emiel_> aha, okay
14:33 < Namegduf> Go has only a few ways to derive a bad pointer, according
to stuff I've read
14:33 < Namegduf> And that was not one of them.
14:33 -!- aho [~nya@f051194071.adsl.alicedsl.de] has quit [Quit:
EXEC_over.METHOD_SUBLIMATION]
14:34 < emiel_> so then, structs are never allocated on the stack :)?
14:34 < Namegduf> Why would you think that?
14:34 < jessta> emiel_: only structs that escape their scope
14:35 < Namegduf> I have a thought that taking the address of something
forces it to be on the heap
14:35 < Namegduf> But that seems wrong
14:35 < Namegduf> I'm not sure.
14:35 < jessta> ummm...only structs that escape their scope aren't allocated
on the stack
14:35 < Namegduf> jessta: How is that determined?
14:35 < jessta> Namegduf: you're right
14:36 < Namegduf> Okay, that works.
14:36 < emiel_> ah
14:36 < emiel_> it does, apparently, thanks :)!
14:36 < jessta> Namegduf: but that's just the easy way and sometimes it's
unnecessary
14:36 < Namegduf> Yeah.
14:37 < jessta> better escape analysis will mean less stuff unnecessarily
put on the heap
14:37 < Namegduf> If it's never passed anywhere, or everywhere it's passed
doesn't store the pointer for later, and it isn't returned (harder to determine
even in simple cases)
14:37 < Namegduf> It could be stack allocated as an optimisation.
14:37 -!- hcatlin [~hcatlin@pdpc/supporter/professional/hcatlin] has joined
#go-nuts
14:37 < emiel_> yes, but also calling functions with the ref (deeper on the
stack) might allow for stack allocation instead of heap
14:38 < jessta> but yeah, just an optimisation
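
A minimal sketch of the distinction, close to emiel_'s bytes.NewBuffer example;
with the gc toolchain, building with -gcflags=-m prints which values escape
(the exact diagnostics are a compiler detail):

    package main

    import "fmt"

    type Buffer struct{ buf []byte }

    // escapes: the &Buffer{} is returned, so it must live on the heap,
    // just like bytes.NewBuffer's "return &Buffer{buf: buf}".
    func escapes(buf []byte) *Buffer {
        return &Buffer{buf: buf}
    }

    // staysLocal: b never leaves the function, so it can live on the stack.
    func staysLocal() int {
        b := Buffer{buf: []byte("hi")}
        return len(b.buf)
    }

    func main() {
        fmt.Println(escapes([]byte("hi")), staysLocal())
    }
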
14:45 -!- ikke [~ikke@unaffiliated/ikkebr] has joined #go-nuts
14:50 < BrowserUk> jessta: If goroutines are cooperatively dispatched within
a thread: new thread( &firstGR ); only goroutines started within firstGR can ever
get control within that thread.  If an instance of the scheduler gets control
first: new thread( &sched, &firstGR ); rescheduling of a GR to another thread is
only possible when the scheduler instance gets control back--which with
cooperative might never happen.  Unless you also have timer or hardware based
14:50 < BrowserUk> interrupts.
14:51 * BrowserUk just trying to reason about how the coroutines and threads
interact.
14:53 < jessta> BrowserUk: every yield is a yield to the scheduler
14:53 * emiel_ likes gofmt, all crappy ;'s gone :)
14:53 < Soultaker> when a goroutine is started, any OS thread can run it
14:54 < Soultaker> but it's true that if all threads are occupied by
goroutines that don't relinquish control then other goroutines are deadlocked
14:54 -!- rv2733 [~rv2733@c-98-242-168-49.hsd1.fl.comcast.net] has joined #go-nuts
14:54 < Soultaker> that's also quite annoying in practice
14:55 < jessta> deadlocks are a bug
14:55 < Soultaker> I'm not sure if Go does its internal scheduling on OS
stuff too
14:55 < Soultaker> (e.g.  if a goroutine blocks on a read call, does it
release the thread?)
14:55 < jessta> yes
14:57 < Soultaker> ok, that's nice.
14:57 < jessta> at least it should
14:58 < jessta> I don't think the scheduler is that great at the moment
14:58 < Soultaker> well, I can imagine it could.  but I don't know if it's
implemented.
14:58 < jessta> since it tends to make things slower the more threads you
have
14:58 < Soultaker> it definitely requires a lot more OS-specific work in the
scheduler
14:58 -!- mertimor [~mertimor@p4FE752C4.dip.t-dialin.net] has quit [Quit:
mertimor]
14:58 < BrowserUk> jessta: If you have 4 cores and 4 Go threads, and each
thread starts a cpu-bound process, there are no yields.  Not a deadlock, just
"everyone's busy, call back later".
14:59 -!- mertimor [~mertimor@p4FE752C4.dip.t-dialin.net] has joined #go-nuts
14:59 < jessta> BrowserUk: yeah?
15:00 < jessta> BrowserUk: If it's not considered a deadlock then what's the
problem?
15:00 -!- mizipzor [~mizipzor@c213-89-173-222.bredband.comhem.se] has quit [Read
error: Connection reset by peer]
15:01 < Soultaker> jessta: the 'problem' could be that latency-sensitive
goroutines can be starved by long-running non-interactive goroutines
15:01 < BrowserUk> Well, if N cpu-bound goroutines getting started
concurrently (where N is dependent upon the target system) can lead to a long
period of nothing else running, it makes it hard to reason about responsiveness
etc.
15:01 < Soultaker> exactly.
15:01 < jessta> Soultaker: ...yeah, so you don't do that
15:02 < BrowserUk> Soultaker: exactly.  And much more succinctly :)
15:02 < jessta> why would you do that if you have latency-sensitive
goroutines?
15:02 < Soultaker> jessta: sometimes you have both
15:02 < Soultaker> e.g.  an application that does some processing but also
has a GUI
15:02 < BrowserUk> jessta: How do you avoid it, when you cannot know how
many cores the system your code will run on might have?
15:02 < Soultaker> I ran into it myself when implementing an AI routine that
I wanted to abort after X seconds
15:03 < Soultaker> which didn't work because the timer package uses a
goroutine for scheduling
15:03 < jessta> it's your code, you yield when you think it's appropriate
15:03 < Soultaker> so when you've used up all the threads with computation,
the timer routine won't get to run and abort them.
15:03 < BrowserUk> jessta: You need to respond to socket comms?  (Is that
verboten?)
15:04 < jessta> call runtime.Gosched()
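
A sketch of what such an explicit yield can look like inside a compute loop,
combined with the timeout check Soultaker wanted earlier; the yield interval
and the work itself are arbitrary stand-ins:

    package main

    import (
        "fmt"
        "runtime"
        "time"
    )

    // compute is a stand-in CPU-bound loop that periodically yields
    // and checks a timeout channel so it can be aborted.
    func compute(timeout <-chan time.Time) int {
        sum := 0
        for i := 1; ; i++ {
            sum += i // stand-in for real work
            if i%1000000 == 0 {
                select {
                case <-timeout:
                    return sum // the timer fired; abort
                default:
                }
                runtime.Gosched() // explicit yield to the scheduler
            }
        }
    }

    func main() {
        fmt.Println(compute(time.After(100 * time.Millisecond)))
    }
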
15:04 < Soultaker> jessta: having to sprinkle all goroutines with yields
just to retain some responsiveness is quite ugly
15:04 < Soultaker> it might also be inefficient
15:04 < Soultaker> in C, I'd create a threadpool for the computation
threads, and run the routines that require some responsiveness in their own OS
threads.
15:05 < Soultaker> but Go (currently) offers no control over which routines
run where
15:05 -!- surma [~surma@77-21-91-152-dynip.superkabel.de] has joined #go-nuts
15:05 < BrowserUk> So, you tune your cpu-bound processes for a 4-core
system, and when run on a 2-core system, it doesn't yield often enough.  But on a
48-core system, it wastes 50% of available cycles yielding unnecessarily.
15:06 < jessta> Soultaker: you can use runtime.LockOSThread()
15:06 < jessta> 50%?
15:06 * BrowserUk is glad that Soultaker also sees the problem.  He was
beginning to think he was alone in missing the magic fix.
15:07 < Soultaker> ah, that's actually nice.  I guess the timer package
should do that, except that's a terrible idea if the total number of threads is
way low.
15:07 < jessta> I don't think a yield is that expensive
15:07 -!- Abablaba1 [~Abablabab@93-96-78-46.zone4.bethere.co.uk] has left #go-nuts
[]
15:08 < Soultaker> I dunno.  I think it could be.  The problem is that you
need more yields the more responsiveness you want.
15:08 -!- Abablabab [~Abablabab@93-96-78-46.zone4.bethere.co.uk] has joined
#go-nuts
15:08 < Abablabab> under what circumstances should the runtime decide not to
yield?  or do they never yield
15:08 < Soultaker> and even aside from efficiency, it's kindof ugly
15:09 < jessta> Soultaker: most of the time you won't even see the yields
15:09 < Namegduf> Have you ever used Go?
15:09 < Namegduf> I'm curious because
15:09 -!- mitsuhiko [~mitsuhiko@ubuntu/member/mitsuhiko] has quit [Excess Flood]
15:09 < BrowserUk> If you are tuned for 4 cores: you've got 44 cores doing
nothing but yields on their timeslices; or you have your 4 cpu-processes
shuttling around between 12 cores each.
15:09 < Namegduf> Go code does not typically contain explicit yields
15:09 < jessta> since they are channel send/recv, syscalls, allocations etc.
15:09 < Soultaker> Namegduf: yes, I've used it in the exact scenario I
described above ;)
15:10 < Soultaker> and then I ran into the timer problems, and I didn't know
an elegant way to fix it.
15:10 < Abablabab> right, so it will yield itself if you do anything
blocking or a call?
15:10 < Abablabab> which makes sense really
15:10 < Soultaker> jessta: agreed, of course, but in a purely computational
process you might not do any of that.
15:10 < Namegduf> The only time you need to worry about explicit yields is
when you're not merely going CPU bound, but going CPU bound without doing any of
the expensive things that Go makes include occasional yields.
15:11 < Namegduf> BrowserUk: That's also wrong.
15:11 < jessta> Soultaker: and so you have to explicitly yield
15:11 < jessta> but your case is rare
15:11 < Soultaker> I feel like we're going in a circle =)
15:11 < Soultaker> well, I happened to run into it naturally, and thought
"this isn't nice"
15:12 < Soultaker> so I wouldn't be surprised if it comes up again later
15:12 * BrowserUk nods.
15:12 < Namegduf> BrowserUk: Firstly, with Go, you do not typically "tune" a
program to a specific number of cores; you'd spawn a number of goroutines
appropriate to the task and let the runtime schedule them across threads to match
the count of cores.
15:12 < jessta> a purely computational goroutine will usually be reporting
back on progress or something
15:12 < jessta> updating the GUI progress bar, logging something
15:12 < Namegduf> BrowserUk: Secondly, I doubt you'd have 48 threads with
only 4 goroutines, all being scheduled, at all.
15:12 -!- mitsuhiko [~mitsuhiko@ubuntu/member/mitsuhiko] has joined #go-nuts
15:13 < Soultaker> jessta: does your database software do that halfway
through a query?
15:13 < Namegduf> BrowserUk: Finally, if you DID "tune" for a specific CPU
count, one would think you'd set the maximum processors to said count, which would
always mean you'd never get more compute threads than that.
15:13 < Namegduf> Individual queries taking that long is quite slow
15:13 < BrowserUk> Namegduf: If you have no explicit yields, you cannot
tune.  But then you can do nothing about responsiveness when the number of
concurrent cpu-bound goroutines equals or exceeds the number of cores.
15:14 < BrowserUk> And that brings us back to the original problem again.
15:14 < Namegduf> BrowserUk: You're misunderstanding.
15:15 < Namegduf> (Also, it's spelled "yield", for the record)
15:15 < jessta> BrowserUk: you can just spawn more threads
15:15 < jessta> BrowserUk: by calling runtime.LockOSThread()
15:15 < Soultaker> Just to be clear, I don't think the problem is a fatal
flaw in Go. I agree that you can work around it.
15:15 < Namegduf> BrowserUk: CPU bound goroutines which don't yield on
their own are the sole case where you would ever have explicit yields.
15:16 < Namegduf> BrowserUk: So you would have explicit yields in the case
you're bringing up and not otherwise.
15:16 < BrowserUk> Namegduf: No, I don't.  You simply aren't open to the
possibility of there being a problem.
15:16 < Soultaker> I'm just saying it's not very nice, while a lot of other
stuff in Go *is* very nice, so maybe how goroutines/threads are managed may need
to be revisited.
15:16 < Namegduf> BrowserUk: No, I'm examining your claimed issue and
finding it faulty on grounds of being based on an invalid assumption.
15:16 < BrowserUk> No, you're being a pedantic ass.
15:17 < Namegduf> This isn't a very constructive reply to my point.
15:17 < BrowserUk> Jessta: What good does it do to spawn another thread?
15:18 < BrowserUk> " *yield, it isn't hard to spell" isn't a point worth
responding too.
15:19 < jessta> BrowserUk: it won't interfere with the other goroutines
15:20 < Namegduf> BrowserUk: ...okay.  My understanding is, your point that
"Goroutines do not explicitly yield" and "CPU bound goroutines without certain
(common) things will not implicitly yield" means that other goroutines will be
blocked by CPU-bound goroutines.
15:21 -!- xenplex [~xenplex@195.46.241.226] has quit [Quit: xenplex]
15:21 < Namegduf> BrowserUk: My point was that the first, "Goroutines do not
explicitly yield" was not correct.  Goroutines do not typically explicitly yield,
but they have to in the case of such CPU bound goroutines, for exactly that
reason.  And that negates the problem.
15:23 < jessta> Soultaker: how else would you do it?
15:23 < BrowserUk> Namegduf: No it doesn't negate it.  Because, if you have
to explicitly yield to maintain responsiveness, then you need to "tune for the
number of available cores"; and that goes belly-up badly when your binary runs on
a different number of cores.
15:23 < Namegduf> Er, no
15:23 < Namegduf> Why?
15:24 < Namegduf> You need to tune for a specific core speed perhaps, to run
for a certain (small, but large in comparison to the time spent computing) time
before yielding in each CPU-bound goroutine
15:24 < BrowserUk> See the earlier iteration of this repeat of the
discussion: REF: 2-cores/ 4-cores /48-cores.  BBQ hot; must cook &
15:25 < Namegduf> I read that, I was...  here and all.  You just asserted
that Go would have 44 threads spinning doing nothing if it spawned 4 CPU
goroutines on a 48 core system without ever backing that up.
15:25 < jessta> BrowserUk: how else would you do it?
15:27 < Namegduf> Formally, I am disputing that "if you have to explicitly
yield" you need to "tune for the number of available cores" and requesting backing
for that, anyways.  If your previous discussion provides it you need to explain
that.
15:29 -!- slashus2 [~slashus2@74-137-24-74.dhcp.insightbb.com] has joined #go-nuts
15:30 -!- andrewh_ [~andrewh@94-194-56-42.zone8.bethere.co.uk] has joined #go-nuts
15:36 -!- Netsplit *.net <-> *.split quits: ShadowIce`, clip9, Keltia,
werdan7, Abablabab, amb
15:37 -!- Netsplit over, joins: ShadowIce`
15:39 -!- amb [amb@SEPIA.MIT.EDU] has joined #go-nuts
15:41 -!- Keltia [roberto@aran.keltia.net] has joined #go-nuts
15:42 -!- TR2N [email@89-180-148-21.net.novis.pt] has joined #go-nuts
15:42 -!- clip9 [tj@12.81-166-62.customer.lyse.net] has joined #go-nuts
15:43 -!- werdan7 [~w7@freenode/staff/wikimedia.werdan7] has joined #go-nuts
15:43 -!- illya77 [~illya77@206-214-133-95.pool.ukrtel.net] has joined #go-nuts
15:43 -!- Abablabab [~Abablabab@93-96-78-46.zone4.bethere.co.uk] has joined
#go-nuts
15:44 -!- mertimor [~mertimor@p4FE752C4.dip.t-dialin.net] has quit [Quit:
mertimor]
15:48 < BrowserUk> Namegduf: It isn't speculation, but hard-won experience.
If you have to maintain say 1/10 second responsiveness, and you have a CPU-bound
task to process concurrently, you want to use as many cores for that as possible.
That means 1 thread per core.  Fewer means underutilising the cores; more means
wasting cycles context switching.  So, to maintain responsiveness you need to
insert yields into your cpu-bound code.  Insert too few, your responsiveness falls
15:48 < BrowserUk> below requirements; insert too many and you're wasting
cycles.  So, you have to "tune".  But that tuning becomes specific to the system
you tune for.  And sub-optimal, often grossly so, on systems other than that
for which you've tuned.  Most good concurrency systems avoid the need for
explicit yields for that very (common, well-known and documented) scenario.
15:48 * BrowserUk is still cooking but will be back.
15:50 -!- hcatlin [~hcatlin@pdpc/supporter/professional/hcatlin] has quit [Ping
timeout: 260 seconds]
15:51 -!- Tiger_ [~chatzilla@74.86.0.138] has joined #go-nuts
15:52 < Namegduf> So the reason you can't, say, consistently yield so a CPU
bound goroutine spends 0.1% or 0.01% of its time yielding
15:52 < Namegduf> Is because that could yield more often than you need, and
that would be absolutely terrible?
15:52 -!- zeitgeist [~chatzilla@222.73.189.44] has quit [Ping timeout: 276
seconds]
15:53 < Namegduf> I mean, I guess if your requirement is "yield once per X
milliseconds" then you'd need to yield at a rate fitting with the goroutine count,
and simply yielding that often PER compute thread is wastage, but it seems like a
ridiculously small amount of wastage to worry about.
15:55 < Namegduf> I think you'd have to be dealing with stuff running on a
realtime OS not to get much worse done to you by the OS's scheduler, anyway, in
which case doing specialised tuning for performance doesn't seem so unreasonable.
15:59 < jessta> BrowserUk: so again, how else would you do it?
16:00 < Namegduf> On further thought, your point appears to boil down to
"yielding at any rate above the minimum will cost performance, and explicit
yields do that if not tuned to processor count", but the problem is...  implicit
yields will do that as well.  In fact, any system for yielding pretty much will,
unless tuned for processor count and your specific requirements
16:00 -!- hcatlin [~hcatlin@pdpc/supporter/professional/hcatlin] has joined
#go-nuts
16:00 < Namegduf> So while you could call that a "problem" in Go, it's a
problem also possessed by the implicit yields and every other scheduling system
I've heard of
16:02 < Namegduf> So yeah, you're going to have to propose a better
solution.
16:03 < jessta> if you want preemptive multitasking, use OS threads
16:04 < jessta> if you want to automatically tune co-operative multitasking
you can insert yields at compile time
16:06 < Namegduf> Or calculate the yield rate at runtime, even, based on
number of compute goroutines spawned
16:07 < Namegduf> It all seems like a lot of mess worrying about a tiny
amount of performance, with a small performance cost being an inherent part of
most all scheduling systems I've ever heard of.
16:08 < jessta> BrowserUk: how would you handle the problem you describe in
C?
16:13 -!- Xera^ [~brit@87-194-208-246.bethere.co.uk] has quit [Ping timeout: 252
seconds]
16:27 -!- Xera^ [~brit@87-194-208-246.bethere.co.uk] has joined #go-nuts
16:30 < kimelto> moin
16:33 -!- hcatlin [~hcatlin@pdpc/supporter/professional/hcatlin] has quit [Quit:
hcatlin]
16:40 -!- skelterjohn [~jasmuth@c-76-99-92-14.hsd1.nj.comcast.net] has joined
#go-nuts
16:43 -!- skelterjohn [~jasmuth@c-76-99-92-14.hsd1.nj.comcast.net] has quit
[Client Quit]
16:45 -!- xenplex [~xenplex@195.46.241.226] has joined #go-nuts
16:45 -!- napsy [~luka@88.200.96.14] has quit [Ping timeout: 264 seconds]
17:01 -!- smw_ [~stephen@pool-96-232-88-231.nycmny.fios.verizon.net] has quit
[Ping timeout: 240 seconds]
17:02 -!- g0bl1n [~anonymous@a213-22-77-195.cpe.netcabo.pt] has left #go-nuts []
17:14 -!- perdix [~perdix@sxemacs/devel/perdix] has joined #go-nuts
17:26 -!- b00m_chef [~watr@d64-180-45-230.bchsia.telus.net] has joined #go-nuts
17:27 -!- napsy [~luka@88.200.96.14] has joined #go-nuts
17:28 -!- deso [~deso@x0561a.wh30.tu-dresden.de] has joined #go-nuts
17:32 -!- MizardX [~MizardX@unaffiliated/mizardx] has joined #go-nuts
17:37 -!- lmoura [~lauromour@200.184.118.130] has quit [Quit: Leaving]
17:47 < Soultaker> "Soultaker: how else would you do it?Soultaker: how else
would you do it?"
17:48 < Soultaker> argh, paste fail.
17:48 < Soultaker> anyway, what I meant to say was that with more low-level
control over threads you could put the worker threads in their own threadpool,
separate from the interactive stuff
17:49 < Namegduf> http://golang.org/pkg/runtime/#LockOSThread
17:49 < Namegduf> As repeatedly mentioned
17:50 < Namegduf> Should be sufficient to do that
17:50 < Soultaker> ok, that does seem to be the best way to do it, but it
doesn't help with the http.Server or the time.Timer stuff ;)
17:50 < Soultaker> but no use restarting the discussion I guess
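
A minimal sketch of the workaround being linked to, as the participants
describe it: each compute goroutine wires itself to its own OS thread with
runtime.LockOSThread, leaving preemption between those threads and the
interactive ones to the OS.  The loop bound is arbitrary:

    package main

    import (
        "fmt"
        "runtime"
    )

    // computeWorker pins itself to an OS thread for the whole computation.
    func computeWorker(id int, done chan<- int) {
        runtime.LockOSThread()
        defer runtime.UnlockOSThread()
        sum := 0
        for i := 0; i < 100000000; i++ {
            sum += i
        }
        _ = sum
        done <- id
    }

    func main() {
        done := make(chan int)
        for i := 0; i < 2; i++ {
            go computeWorker(i, done)
        }
        for i := 0; i < 2; i++ {
            fmt.Println("worker", <-done, "finished")
        }
    }
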
17:55 -!- skelterjohn [~jasmuth@lawn-net168-in.rutgers.edu] has joined #go-nuts
17:56 -!- skelterjohn [~jasmuth@lawn-net168-in.rutgers.edu] has quit [Client Quit]
18:08 -!- meatmanek [~meatmanek@c-76-21-205-249.hsd1.va.comcast.net] has quit
[Quit: This computer has gone to sleep]
18:10 -!- b00m_chef [~watr@d64-180-45-230.bchsia.telus.net] has quit [Ping
timeout: 240 seconds]
18:14 -!- ShadowIce` [pyoro@unaffiliated/shadowice-x841044] has quit [Ping
timeout: 240 seconds]
18:15 -!- Ideal [~Ideal@ideal-1-pt.tunnel.tserv6.fra1.ipv6.he.net] has joined
#go-nuts
18:21 -!- slashus2 [~slashus2@74-137-24-74.dhcp.insightbb.com] has quit [Quit:
slashus2]
18:30 -!- thaorius [~thaorius@190.247.193.207] has joined #go-nuts
18:33 -!- skelterjohn [~jasmuth@c-76-99-92-14.hsd1.nj.comcast.net] has joined
#go-nuts
18:33 -!- Null-A [~Null-A@c-98-210-102-188.hsd1.ca.comcast.net] has joined
#go-nuts
18:34 < BrowserUk> Namegduf: LockOSThread, as repeatedly mentioned, should
be sufficient to do that.  But: "LockOSThread wires the calling goroutine
to its current operating system thread...it will always execute in that thread,
and no other goroutine can.".  So, you lock your high-response routine to one
thread--and on a 2-core system waste 50% of the processing power for the cpu-bound
task.
18:34 < Namegduf> BrowserUk: You lock the two compute goroutines to
different threads, you tool
18:35 -!- wrtp [~rog@89.242.170.31] has quit [Quit: wrtp]
18:35 -!- ikaros [~ikaros@f051118043.adsl.alicedsl.de] has joined #go-nuts
18:39 < BrowserUk> Namegduf: Do you have the time cost (in cycles) of:
getContext(CPU-bound), add to scheduler tables; remove context of responseTask
from tables; setContext( respTask ); spin, nothing to do; yield() to scheduler;
getContext( respTask ); add to tables; cpu-bound-context = remove from tables;
setContext( cpu-bound-context ); yield; ...?  (And have you considered
how many (say) adds and multiplies you could do in that time?  Now multiply that
18:39 < BrowserUk> wastage by 48 cores.)
18:40 < Namegduf> If your wastage is only increasing linearly with the
number of cores, consider yourself lucky
18:40 < Namegduf> BrowserUk: The answer is "a tiny fraction of the number of
adds and multiplies you are doing"
18:40 < Namegduf> BrowserUk: Regardless
18:40 < BrowserUk> Namegduf: On a two-core system, you lock two
compute-bound routines to both cores, and you have ZERO responsiveness.
18:40 < Namegduf> BrowserUk: Um, no
18:40 < Namegduf> BrowserUk: You have a third thread being scheduled by the
OS as needed
18:42 < BrowserUk> And this bit: ".it will always execute in that thread,
and *no other goroutine can.*"?
18:42 < Namegduf> What about it?
18:42 < BrowserUk> You interpret that to mean no other goroutine *within
that thread*.
18:42 < BrowserUk> Or "no other goroutine within the current process"?
18:43 < Namegduf> The former, because that's accurate.
18:43 < BrowserUk> You know that for sure?
18:43 < Namegduf> I know it because in English, that sentence parses as
18:43 < Namegduf> "It will always execute in that thread, and no other
goroutine can execcute in that thread."
18:44 -!- skelterjohn [~jasmuth@c-76-99-92-14.hsd1.nj.comcast.net] has quit [Quit:
skelterjohn]
18:44 < BrowserUk> Then, you are basically making your arguments up as you
go along--because *THAT'S NOT WHAT ACTUALLY HAPPENS*
18:44 < Namegduf> Then file a bug
18:44 < BrowserUk> Judge: I've no more use for this guy!
18:47 -!- mertimor [~mertimor@p4FE752C4.dip.t-dialin.net] has joined #go-nuts
18:48 < BrowserUk> And we're back to where I came in--"Has anyone seen any
documentation/talks/conceptual overview of go's concurrency stuff?"...because, I'm
having to guess about the interpretation of the scant information available, and
it isn't doing what I think it should, but I could be misinterpreting that scant
information.  Hence my seeking clarification.  (Preferably from people who know,
not just make up what they think it might do!)
18:56 -!- illya77 [~illya77@206-214-133-95.pool.ukrtel.net] has quit [Read error:
Connection reset by peer]
18:57 < BrowserUk> jessta: In C: 1 + number-of-cores OS threads.  One at
higher priority monitoring the responsiveness work.  The other N run flat out (no
yields or context switches), at 'normal' priority, on the cpu-bound task.  The
pre-emption + priority ensures that the responsive thread gets a timeslot whenever
it is able to run.  No yielding means no tuning for the cpu-bound threads.  They
just make full use of every timeslice the OS gives them.  What I'm trying to get a
18:57 < BrowserUk> handle on is how having cooperative threads mixed with
OS threads affects things.
19:00 < jessta> BrowserUk: you could do the same thing in Go
19:00 < Namegduf> Yes, using LockOSThread.
19:02 < BrowserUk> jessta: if I follow the same pattern and start N OS
threads for the cpu-bound tasks, and have the original run the responsive task, is
that enough?  If the coroutine scheduler only schedules goroutines originating
within a given OS thread within that thread, then you can reason about that.  But
if (as mentioned earlier) Go will distribute goroutines between threads, you
cannot.  (Because once all N cpu-bound routines are running, 1 per core, the
19:02 < BrowserUk> responsiveness thread is frozen out.)
19:02 < BrowserUk> And that's (kinda) what I'm seeing.
19:04 < BrowserUk> (But we already know that Go on my OS is relatively flaky
and fragile compared to the mainstream...so it could just be playing catch-up.
(Hence earlier requests for contact with other non-mainstream Go developers)
19:05 < BrowserUk> But that's a channel freezer question here it seems.
19:07 -!- Kashia [~Kashia@p4FEB40E7.dip.t-dialin.net] has joined #go-nuts
19:08 < BrowserUk> It all comes back to trying to understand how the
cooperative scheduler mixes with OS threads.  Does it do its thing entirely within
an OS thread, or does it transcend OS threads?
19:09 < Namegduf> ...okay, I've gone and looked at the scheduler source so I
can explain this the long way.
19:09 < Namegduf> You have so many goroutines, and so many threads.
19:09 < Namegduf> When a thread yields, it calls into the scheduler and gets
a new goroutine to run.
19:10 < Namegduf> When a goroutine is started, if GOMAXPROCS is not reached,
it starts a new thread to run it on.
19:10 < Namegduf> There's only one goroutine list, here called "G"
19:10 < Namegduf> That's a rough understanding, and it may have
inaccuracies, but that is roughly how I thought it worked.
19:11 < Namegduf> "src/pkg/runtime/proc.c" is where I'm looking and seems a
good place to look for more information.
19:12 < Namegduf> Go has absolutely nothing to do with the operating
system's thread scheduler, so threads are scheduled themselves as normal.
19:18 -!- alehorst [~alehorst@189.26.56.41.dynamic.adsl.gvt.net.br] has quit [Read
error: Connection reset by peer]
19:21 -!- alehorst [~alehorst@187.59.66.96] has joined #go-nuts
19:25 < BrowserUk> "Go has absolutely nothing to do with the operating
system's thread scheduler, so threads are scheduled themselves as normal." That is
so condescending.
19:27 -!- andrewh_ [~andrewh@94-194-56-42.zone8.bethere.co.uk] has quit [Ping
timeout: 260 seconds]
19:28 -!- exch [~nuada@h144170.upc-h.chello.nl] has quit [Ping timeout: 240
seconds]
19:28 -!- exch [~nuada@h144170.upc-h.chello.nl] has joined #go-nuts
19:29 < BrowserUk> And completely belies the existence of LockOSThread().
19:29 < Namegduf> How so?
19:30 < Namegduf> LockOSThread() alters how goroutines are scheduled onto
threads.  Not how threads are scheduled onto cores.
19:36 -!- Fish [~Fish@bus77-2-82-244-150-190.fbx.proxad.net] has joined #go-nuts
19:44 < BrowserUk> Okay.  Two cores.  Three OS threads.  Two OS threads
running CPU-bound goroutines.  The third, the high-responsiveness, short-duration
goroutine.  A point in time: both CPU-bound threads/goroutines are running.  One
of them completes its current OS timeslice.  At some point in the future, the resp
thread/goroutine gets a slot.  It runs, the responsive goroutine finds nothing to
do, and quickly yields to the Go scheduler.  Still lots of OS timeslice left,
19:44 < BrowserUk> so it looks for something else to schedule.  There is
nothing, because neither of the other goroutines has yielded back to the
scheduler--one is still running and the other was pre-empted.  So, what does it do
with the rest of its timeslice?
19:45 < Namegduf> Huh?
19:45 < Namegduf> Oh
19:45 < Namegduf> What do all other programs do when they've nothing to do?
19:45 < Namegduf> I would assume the thread goes to sleep
19:46 < BrowserUk> Sleep.
19:48 < Namegduf> There'll be a separate OS thread for every goroutine
waiting on a syscall, so when those complete meaning there's something to do, it
should get woken up as normal.  I *think* that's how it works.  Obviously it does
work somehow.
19:49 < Namegduf> (That's over simplified from even what I know, I'm pretty
sure Go uses poll() and friends for multiple goroutines all blocked on fds and
similar)
19:49 < BrowserUk> But why would the Go scheduler go to sleep when it has
the responsiveness thread that just yielded to it sitting in its table?  Why not
just yield back to it?
19:49 < Namegduf> Well, it depends.
19:49 < BrowserUk> Not so sure now?
19:49 < Namegduf> Stop being a douche, will you?  The answer is fairly
simple.
19:50 < Namegduf> If the goroutine in that thread has yielded rather than
becoming blocked, it presumably would
19:50 < Namegduf> I'm guessing from an impression of this that it's readded
to the list of goroutines to run, and then the thread gets a new one from the top
19:50 < Namegduf> Which in this case just happens to be the same one
19:51 < Namegduf> If the goroutine has "yielded" by blocking, which is what
would happen when it wasn't actually doing any work
19:51 < Namegduf> Then obviously it'd stop being runnable until whatever it
blocked on has something happen
19:51 < Namegduf> And there's no goroutine to run
19:52 < jessta> BrowserUk: what does this responsiveness thread do?
19:52 < jessta> is it waiting for user input?
19:53 < Namegduf> What would really happen, probably, is that GOMAXPROCS
would be set to one above the real processor count
19:53 < Namegduf> And there's "processor count" compute threads.
19:53 < Namegduf> And goroutines locked to them.
19:54 < Namegduf> All the other goroutines would presumably in this model be
able to use the other thread whenever they unblocked.
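
A sketch of the arrangement Namegduf describes, under the cooperative-scheduling
assumptions of the time: GOMAXPROCS one above the core count, compute goroutines
pinned with LockOSThread, and the latency-sensitive work floating on the spare
thread.  runtime.NumCPU is the modern name for the core count:

    package main

    import (
        "fmt"
        "runtime"
        "time"
    )

    func main() {
        n := runtime.NumCPU()
        runtime.GOMAXPROCS(n + 1) // one spare thread for responsive goroutines

        for i := 0; i < n; i++ {
            go func(id int) {
                runtime.LockOSThread() // pin each compute goroutine to a thread
                for {
                    _ = id * id // stand-in for CPU-bound work
                }
            }(i)
        }

        // The responsive goroutine floats on the remaining thread.
        for i := 0; i < 3; i++ {
            time.Sleep(100 * time.Millisecond)
            fmt.Println("still responsive", i)
        }
    }
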
19:55 -!- c0nn0r [~cnnr@CODEX.wbb.net.cable.rogers.com] has joined #go-nuts
19:55 -!- c0nn0r [~cnnr@CODEX.wbb.net.cable.rogers.com] has quit [Client Quit]
19:56 -!- c0nn0r [~cnnr@CODEX.wbb.net.cable.rogers.com] has joined #go-nuts
19:56 -!- c0nn0r [~cnnr@CODEX.wbb.net.cable.rogers.com] has left #go-nuts []
19:56 < BrowserUk> jessta: It might (for example) be monitoring a Windows
message queue--but equally could be checking for an inbound connection, or a file
turning up in a directory.  Or all 3.
19:56 < Namegduf> Well
19:57 < Namegduf> A *thread* doesn't do anything but run goroutines
19:57 < Namegduf> You'd have a goroutine for each of those scheduled onto
the thread whenever it wanted to do anything.
19:57 < jessta> BrowserUk: so, really it makes a call to the OS and sleeps
19:57 < Namegduf> But yes.
19:57 < BrowserUk> How does one ignore a user?
19:58 < BrowserUk> Probably jessta.
20:01 < BrowserUk> Namegduf: "A *thread* doesn't do anything but run
goroutines" And you accuse me of being a "douche"?  Is it beyond you, given the
laborious and meticulous way I described the scenario, to extrapolate "the thread
running the responsiveness goroutine" from "this responsiveness thread"?
20:02 < Namegduf> Yeah, I guess I did around the time you were personally
insulting me for "It depends.", which was directly followed by an explanation of
what it depends on and the behaviour in each case.
20:03 < Namegduf> And you wouldn't have a "responsiveness goroutine" doing
"all three", you'd have one for each, and it's relevant to how blocking on
syscalls and then scheduling stuff when they return is done, so I felt it was
worth mentioning.
20:03 < BrowserUk> jessta: I guess you're saying: why would the
responsiveness goroutine ever yield to the Go scheduler.  And I think you're
right.
20:04 < Namegduf> Yer goroutine will yield every time it makes a syscall,
which is required for all three of those examples.
20:05 < Namegduf> Well, it'll call into the scheduler, you can draw a
distinction between "yielding" and "becoming blocked", and it's the latter.
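A small sketch of the yield/block distinction Namegduf draws, using
runtime.Gosched() for the explicit yield and a channel receive standing in for
a blocking syscall; the channel names are invented for illustration:

    package main

    import (
        "fmt"
        "runtime"
    )

    func main() {
        events := make(chan string)
        done := make(chan bool)

        go func() {
            // "Yielding": the goroutine stays runnable and will be
            // rescheduled; it merely lets something else go first.
            runtime.Gosched()

            // "Becoming blocked": not runnable at all until a value
            // arrives on the channel, much like blocking in a syscall.
            msg := <-events
            fmt.Println("woke up with:", msg)
            done <- true
        }()

        events <- "user input"
        <-done
    }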
20:06 < BrowserUk> And you hit the nail on the head Namegduf.  If the
responsiveness thread is actually running several responsiveness goroutines--one
for the message queue; one for the directory; one for the sockets; then it will
yield to the scheduler.  And when it yields, one of its possibilities for the
next routine to run will be the CPU-bound goroutine that was just interrupted.
If the other three are blocking on syscalls...
20:07 < Namegduf> Hmm?
20:07 < jessta> I think this discussion needs to result in some kind of blog
post with diagrams
20:07 < Namegduf> Yeah, pretty pictures
20:07 < Namegduf> BrowserUk: I think I see the problem here
20:08 < Namegduf> BrowserUk: No, a thread which is stopped by the OS, does
not stop the goroutine running within it and make it free to be scheduled
20:08 < BrowserUk> Anyway, maybe I've at least convinced you guys that my
desire to want to understand a bit more about the go scheduler isn't a complete
waste of time.
20:08 < BrowserUk> I have to leave...but I'll try to come back with pictures
and a simplified code example.
20:09 < Namegduf> That's not possible with preemptive multitasking, although
it might be kind of possible to emulate
20:09 -!- pct [~pct@deep.tw] has quit [Ping timeout: 240 seconds]
20:09 -!- tsung_ [~jon@112.104.53.151] has quit [Read error: Operation timed out]
20:09 < Namegduf> So your CPU-bound goroutine would not be free to be
scheduled just because its bound thread got interrupted
20:10 < BrowserUk> I just want to get sufficient information to allow me to
reason about what is happening.  Perhaps then, when things go differently from
my expectations, I can decide whether it is a bug, a design limitation, or
simply user error.  BBL.
20:11 < Soultaker> I think I understand the behaviour, but I still think
that behaviour can be problematic in some cases.
20:12 < Soultaker> might not be a bad idea to write up some details of the
concurrency model though (if it's a language feature)
20:13 -!- scarabx [~scarabx@c-76-19-43-200.hsd1.ma.comcast.net] has quit [Quit:
This computer has gone to sleep]
20:14 < Namegduf> http://golang.org/doc/effective_go.html#concurrency is
perhaps too simple to answer some of these questions.
20:14 < jessta> Soultaker: it's not really a language feature, it's more an
implementation detail
20:15 < Namegduf> I'll admit it is limited in that it doesn't mention
explicit yields, which are rarely needed but probably deserve a mention for the
specific circumstances where they are necessary.
20:19 -!- rv2733 [~rv2733@c-98-242-168-49.hsd1.fl.comcast.net] has quit [Quit:
Leaving]
20:24 -!- Xera` [~brit@87-194-208-246.bethere.co.uk] has joined #go-nuts
20:26 -!- xenplex [~xenplex@195.46.241.226] has quit [Quit: xenplex]
20:26 -!- Xera^ [~brit@87-194-208-246.bethere.co.uk] has quit [Ping timeout: 276
seconds]
20:31 -!- gospch [~gospch@unaffiliated/gospch] has joined #go-nuts
20:31 -!- napsy [~luka@88.200.96.14] has quit [Ping timeout: 240 seconds]
20:33 -!- gospch [~gospch@unaffiliated/gospch] has quit [Read error: Connection
reset by peer]
20:36 -!- napsy [~luka@88.200.96.14] has joined #go-nuts
20:50 -!- abunner [~abunner@c-71-198-231-134.hsd1.ca.comcast.net] has joined
#go-nuts
20:51 -!- Shyde [~shyde@HSI-KBW-078-043-070-132.hsi4.kabel-badenwuerttemberg.de]
has joined #go-nuts
20:54 -!- alehorst [~alehorst@187.59.66.96] has quit [Remote host closed the
connection]
20:54 < BrowserUk> Namegduf: (This is just a drive-by.  bbl): "BrowserUk:
No, a thread which is stopped by the OS, does not stop the goroutine running
within it and make it free to be scheduled" The problem with that is that it
assumes that there is no scope for (cooperative) parallelism in the cpu-bound
algorithm.  (Ill-thought-through example: MatMult on a very large matrix: one
could see breaking a large MatMult into four parts for concurrency; but then
subdividing
20:54 < BrowserUk> each part into coroutines:
http://www.slideshare.net/pkpramit/matrix-multiplicationan-example-of-concurrent-programming.
Then, each cpu-bound thread might re-enter the scheduler tables at the same time
as the resp. thread/goroutine; and things start getting confused as to when the
resp. thread will next get scheduled.
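For concreteness, a sketch of the coarse-grained half of the idea BrowserUk is
pointing at: splitting one matrix multiplication into row bands, one goroutine
per band, fanning back in on a channel.  The sizes and the done channel are
assumptions for illustration; the slides' finer coroutine subdivision is not
shown.

    package main

    import "fmt"

    // multiplyRows computes rows lo..hi-1 of c = a * b, then signals done.
    func multiplyRows(a, b, c [][]float64, lo, hi int, done chan bool) {
        for i := lo; i < hi; i++ {
            for j := range b[0] {
                sum := 0.0
                for k := range b {
                    sum += a[i][k] * b[k][j]
                }
                c[i][j] = sum
            }
        }
        done <- true
    }

    func main() {
        const size, parts = 4, 2 // small, assumed sizes
        a := make([][]float64, size)
        b := make([][]float64, size)
        c := make([][]float64, size)
        for i := 0; i < size; i++ {
            a[i] = make([]float64, size)
            b[i] = make([]float64, size)
            c[i] = make([]float64, size)
            for j := 0; j < size; j++ {
                a[i][j], b[i][j] = 1, float64(j)
            }
        }
        done := make(chan bool)
        band := size / parts
        for p := 0; p < parts; p++ {
            go multiplyRows(a, b, c, p*band, (p+1)*band, done)
        }
        for p := 0; p < parts; p++ { // wait for all bands to finish
            <-done
        }
        fmt.Println(c)
    }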
20:54 < Namegduf> No
20:55 < Namegduf> The CPU bound thread will never reenter the scheduler
tables
20:55 < Namegduf> Er
20:55 < Namegduf> Clarification.
20:55 < Namegduf> Threads and goroutines remain separate.
20:55 < Namegduf> The CPU bound goroutine will never reenter the scheduler
tables.
20:56 -!- alehorst [~alehorst@187.59.66.96] has joined #go-nuts
20:56 < Namegduf> Unless it blocks on something, in which case I believe it
will be in there, but no thread other than the thread it is bound to will run it
20:56 < Namegduf> And the thread it is bound to will go to sleep until the
goroutine is unblocked
20:57 < Namegduf> Whether a goroutine is running on a thread and whether a
thread is running on a core look to be separate; if you've seen docs otherwise I
could be wrong, but I'd be surprised, there's no reason they'd be the same.
20:58 < Namegduf> If one of the CPU-bound goroutine threads is interrupted,
the goroutine is still on that thread, that thread just isn't running, which I
don't think Go cares about.
20:58 < Namegduf> If you knew that, then I'm misunderstanding the issue.
21:02 < BrowserUk> Did you read the second approach under the heading
"Proposed approach" on slide 2?
21:02 < Namegduf> Yes, that approach is incompatible with the "use
LockOSThread" idea.
21:02 < Namegduf> Well, I think so
21:03 < Namegduf> Hmm
21:03 < Namegduf> I might be seeing what you mean.
21:03 < Namegduf> It might work perfectly fine, and just create a lot of OS
threads with only gomaxprocs running at once.
21:03 < Namegduf> Is that what you're expecting?
21:04 < Namegduf> In that case, though, you're not limiting the number of
compute threads to below gomaxprocs
21:05 < Namegduf> And thus could quite well end up with compute threads
running up to gomaxprocs and no free threads for goroutines becoming unblocked due
to user input/other events.
21:05 -!- skelterjohn [~jasmuth@c-76-99-92-14.hsd1.nj.comcast.net] has joined
#go-nuts
21:06 < Namegduf> If your point is that you can't easily use that idea to
only run GOMAXPROCS - 1 compute threads, with GOMAXPROCS equal to processors + 1
21:07 < Namegduf> Then that does seem valid and you'd have to use other
methods.  Unfortunately, all methods suck for that, you need a *clever* scheduler
to basically always schedule the IO-bound thing quickly while giving all the rest
of the time to CPU-bound things
21:07 < Namegduf> There's a whole bunch of ideas for OS scheduling and it's
a big design issue for OS-level thread/process scheduling with a number of
approaches
21:07 < Namegduf> But Go does not cleverly do that for goroutines.
21:08 -!- skelterjohn [~jasmuth@c-76-99-92-14.hsd1.nj.comcast.net] has quit
[Client Quit]
21:08 < Namegduf> Go uses a cheap FIFO, as I have heard/read.
21:13 < Namegduf> Basically, even if you keep yielding or even get preempted,
if you make lots and lots of CPU-bound things and have a few IO-bound things,
you need cleverness in the scheduling algorithm, or grouping, or something, to
introduce bias towards scheduling the IO-bound things when available.
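One crude user-level workaround for the problem Namegduf describes, sketched
under the assumption that no cleverer scheduler help is available: bound the
number of simultaneously running compute goroutines with a counting semaphore,
so there is always headroom for IO-bound goroutines.  This is bias by
limitation, not real priority scheduling; the job count and semaphore size are
invented for the example.

    package main

    import "fmt"

    func main() {
        const jobs = 4
        const maxCompute = 1 // keep this below GOMAXPROCS; assumed value

        sem := make(chan bool, maxCompute) // counting semaphore
        done := make(chan bool)

        for job := 0; job < jobs; job++ {
            go func(id int) {
                sem <- true              // acquire a compute slot
                defer func() { <-sem }() // release it when finished
                // ... CPU-bound work for this job would go here ...
                fmt.Println("finished job", id)
                done <- true
            }(job)
        }
        for i := 0; i < jobs; i++ {
            <-done
        }
    }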
21:15 -!- Null-A [~Null-A@c-98-210-102-188.hsd1.ca.comcast.net] has quit [Quit:
Null-A]
21:17 -!- Null-A [~Null-A@c-98-210-102-188.hsd1.ca.comcast.net] has joined
#go-nuts
21:21 -!- ShadowIce` [pyoro@unaffiliated/shadowice-x841044] has joined #go-nuts
21:21 -!- Fish [~Fish@bus77-2-82-244-150-190.fbx.proxad.net] has quit [Read error:
Connection reset by peer]
21:26 -!- skelterjohn [~jasmuth@c-76-99-92-14.hsd1.nj.comcast.net] has joined
#go-nuts
21:26 -!- skelterjohn [~jasmuth@c-76-99-92-14.hsd1.nj.comcast.net] has quit
[Client Quit]
21:26 -!- tsung [~jon@112.104.53.151] has joined #go-nuts
21:27 -!- fhs [~fhs@pool-71-167-84-226.nycmny.east.verizon.net] has quit [Quit:
leaving]
21:27 -!- skelterjohn [~jasmuth@c-76-99-92-14.hsd1.nj.comcast.net] has joined
#go-nuts
21:33 -!- Ideal [~Ideal@ideal-1-pt.tunnel.tserv6.fra1.ipv6.he.net] has quit [Quit:
Ideal]
21:34 -!- skelterjohn [~jasmuth@c-76-99-92-14.hsd1.nj.comcast.net] has quit [Quit:
skelterjohn]
21:39 -!- napsy [~luka@88.200.96.14] has quit [Ping timeout: 260 seconds]
21:44 -!- Shyde [~shyde@HSI-KBW-078-043-070-132.hsi4.kabel-badenwuerttemberg.de]
has quit [Remote host closed the connection]
21:46 -!- napsy [~luka@88.200.96.14] has joined #go-nuts
21:49 -!- mertimor [~mertimor@p4FE752C4.dip.t-dialin.net] has quit [Quit:
mertimor]
21:58 -!- deso [~deso@x0561a.wh30.tu-dresden.de] has quit [Read error: Connection
reset by peer]
22:02 -!- kel__ [~kel@cpc2-leat2-0-0-cust98.hers.cable.ntl.com] has quit [Ping
timeout: 265 seconds]
22:07 -!- alehorst [~alehorst@187.59.66.96] has quit [Remote host closed the
connection]
22:09 -!- surma [~surma@77-21-91-152-dynip.superkabel.de] has quit [Ping timeout:
260 seconds]
22:18 -!- rhelmer [~rhelmer@adsl-69-107-89-5.dsl.pltn13.pacbell.net] has quit
[Quit: rhelmer]
22:31 -!- ikaros [~ikaros@f051118043.adsl.alicedsl.de] has quit [Quit: Leave the
magic to Houdini]
22:35 -!- mertimor [~mertimor@p4FE752C4.dip.t-dialin.net] has joined #go-nuts
22:37 -!- ShadowIce` [pyoro@unaffiliated/shadowice-x841044] has quit [Quit:
Verlassend]
22:39 -!- perdix [~perdix@sxemacs/devel/perdix] has quit [Quit: A cow.  A
trampoline.  Together they fight crime!]
22:41 < Abablabab> is it possible to serialise processes in go?
22:41 < BrowserUk> Namegduf: With pure OS threads (and even fibers within OS
threads), using priorities is sufficient to ensure that low-latency threads get
a look-in in the face of CPU-bound threads (and CPU-bound fibers).  But OS-level
abstractions of threading and locking and shared state are awful, and Go
provides (potentially) the best abstraction of all that I've yet seen.  And
maybe (carefully) combining OS thread priorities with goroutines and channels is
22:41 < BrowserUk> an effective strategy, but being careful requires
knowledge.  I was (am) just seeking that knowledge.  Your quick-fire dismissals
of any concerns (frequently combined with a condescending attitude and/or
pedantic nitpicking) are not particularly helpful.  "not so sure now?" is a
valid observation/question in the light of your preceding descent from outright
dismissal, to hesitance, to backtracking, to maybe....  But many of your
insights are
22:41 < BrowserUk> valuable....  hard to ignore.  G'night.
22:46 -!- BrowserUk [~irc1_20_B@92.15.81.219] has quit [Ping timeout: 245 seconds]
22:54 -!- Boney [~paul@124-168-76-47.dyn.iinet.net.au] has quit [Ping timeout: 240
seconds]
22:57 -!- Boney [~paul@203-217-87-237.dyn.iinet.net.au] has joined #go-nuts
22:57 < Project_2501> bye o.o/
22:58 -!- Project_2501 [~Marvin@82.84.74.54] has quit [Quit: E se abbasso questa
leva che succ...]
23:03 -!- werdan7 [~w7@freenode/staff/wikimedia.werdan7] has quit [Ping timeout:
619 seconds]
23:09 -!- jimi_hendrix [~jimi@unaffiliated/jimihendrix] has joined #go-nuts
23:09 < jimi_hendrix> how does gccgo compare to the other compilers?
23:10 -!- werdan7 [~w7@freenode/staff/wikimedia.werdan7] has joined #go-nuts
23:13 < exch> jimi_hendrix: From what people have mentioned in this channel
before: gccgo produces faster code than 6g/8g/etc.  But there seems to be a
difference in the way it deals with goroutines, making cross-routine
communication a fair bit slower.  I have no experience with gccgo though, so I
can't confirm this
23:13 < Abablabab> afaik it uses threads for goroutines
23:13 < jimi_hendrix> ah
23:14 < Abablabab> meaning you'd get better performance from the g* compilers
23:14 < jimi_hendrix> Abablabab, if i use g* that is
23:14 < Abablabab> yeah, sorry
23:15 < Abablabab> I hate to sound like spam but does anyone know if Go
processes can be serialised?
23:18 -!- zachk [~geisthaus@unaffiliated/zachk] has joined #go-nuts
23:26 -!- zachk1 [~geisthaus@unaffiliated/zachk] has joined #go-nuts
23:26 < jimi_hendrix> but if i do not use goroutines (they do not really
lend themselves to the program i am planning on writing) gccgo will be faster
23:26 -!- zachk1 [~geisthaus@unaffiliated/zachk] has left #go-nuts []
23:26 < Abablabab> jimi_hendrix: im not sure why you'd use go if you had no
interest in concurrency at all
23:26 -!- zachk [~geisthaus@unaffiliated/zachk] has quit [Ping timeout: 245
seconds]
23:26 < Namegduf> Go is awesome in lots of other ways
23:27 < Namegduf> A C-ish level language designed for elegant simplicity,
with GC, is pretty cool.
23:27 < Abablabab> it's nice, but i think there are other languages that you
could use quite readily
23:27 -!- General13372 [~support@71-84-50-230.dhcp.mtpk.ca.charter.com] has joined
#go-nuts
23:27 -!- scm [justme@80.171.71.228] has quit [Ping timeout: 264 seconds]
23:27 < Abablabab> the only reason i say that is go still feels very
incomplete as a full language
23:28 < Namegduf> Other people get different "feel"s.
23:28 < Abablabab> true that
23:29 < jimi_hendrix> Abablabab, what other languages have C-level design
with gc
23:29 -!- rlab_ [~Miranda@91.200.158.34] has quit [Read error: Connection reset by
peer]
23:31 -!- General1337 [~support@71-84-50-230.dhcp.mtpk.ca.charter.com] has quit
[Ping timeout: 260 seconds]
23:31 < Abablabab> there are GC 'additions' for C, but i see your point
23:32 < jimi_hendrix> D has promise, but i find it feels more incomplete
than go.  at least go has one standard library
23:33 < Abablabab> I like the community with go as well
23:35 < jimi_hendrix> yes
23:35 < Abablabab> and if you dont think go has a feature you want, come and
moan in here and get your ass handed to you by smarter people
23:36 < jimi_hendrix> :D
23:36 < Abablabab> that's happened to me once or twice..  or more
23:36 < Abablabab> but im a student, im allowed to be stupid
23:36 < jimi_hendrix> go needs to have a builtin function to produce HL2
episodes that will score at least a 90 on metacritic
23:37 < Abablabab> python has that i think
23:37 < Abablabab> import valve
23:37 < Abablabab> but if you want it to work in any reasonable time you
need to import future as well
23:37 < jimi_hendrix> err what version of python
23:37 < jimi_hendrix> 2.6.5 isnt giving it to me
23:38 < Soultaker> yeah, you need 4.  breaks compatibility again, though, so
no distro ships it yet.
23:38 < jimi_hendrix> lol
23:39 < Soultaker> but seriously, I've been using Go in some toy projects
23:39 < Soultaker> and in my experience the language is fundamentally good,
but the standard library is lacking in places
23:39 < Soultaker> which is of course something that's likely to be remedied
given time.
23:40 < Abablabab> I need to look into the serialisation of goroutines, or
find another language to build my dissertation on
23:40 < Soultaker> also the language is relatively spartan in some regards
(especially compared to e.g.  C++)
23:40 < Soultaker> Abablabab: what do you mean by that exactly?
23:40 < Abablabab> Soultaker: if you spawn a non-dependent goroutine which
does, say, prime calculation
23:41 < Abablabab> the ability to serialise it, and save its state that
way, or move it to a different environment
23:42 < Abablabab> and then 'unpack' it back into a running process
23:42 < Soultaker> ah hmm, I don't think that is supported
23:42 < Soultaker> although you can serialize closures
23:42 < Abablabab> CSP languages like Occam have that, they just dont have
anything else :p
23:42 < Abablabab> closures?
23:42 < Soultaker> basically, the goroutine before you run it ;)
23:43 < Abablabab> oh, that's pretty cool
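What Soultaker seems to mean, in sketch form: until the go statement is
applied, a goroutine's body is an ordinary function value that can be stored or
passed around.  (Note, though, that gob encodes data, not functions, so the
closure itself still can't be serialised; the names here are invented for the
example.)

    package main

    import "fmt"

    func main() {
        // A "goroutine before you run it" is just a function value;
        // it can be stored or passed around like any other value.
        task := func(ch chan int) {
            ch <- 42
        }

        ch := make(chan int)
        go task(ch) // only now does it become a goroutine
        fmt.Println(<-ch)
    }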
23:43 -!- vsayer [~vivek@c-76-103-244-154.hsd1.ca.comcast.net] has joined #go-nuts
23:43 < Soultaker> I know neither CSP nor Occam so I don't really know how
that's supposed to work with a running goroutine though
23:43 < Soultaker> do those languages offer shared memory as well as
communication over channels?
23:43 -!- tsykoduk [~tsykoduk@2001:470:1f04:671:20d:93ff:fe77:1dc4] has quit [Read
error: Operation timed out]
23:43 < Abablabab> yeah
23:44 < Abablabab> it's pretty crazy stuff, you can freeze a process, and
then move it over a channel of type process into a compatible environment, and
unfreeze it
23:44 < Abablabab> meaning you can send them over networks and things
23:44 < Soultaker> I don't think Go is ready for that right now
23:45 < Soultaker> especially since the memory space cannot be shared
23:45 < Soultaker> I mean, is not shared between separate processes
23:45 < Abablabab> well, you could implement a reasonably naive version of
it
23:45 < Soultaker> (though there is already support for networked channels,
which is at least part of what you'd need)
23:45 -!- tsykoduk [~tsykoduk@2001:470:1f04:671:20d:93ff:fe77:1dc4] has joined
#go-nuts
23:46 < Soultaker> I don't think the runtime is able to suspend or move
goroutines.  Goroutines aren't first-class objects.
23:46 < Soultaker> Again, I'm far from an authority on the subject.  But
from reading the Language Specification it doesn't seem like something that Go can
do.
23:47 < Abablabab> if you can serialise a not-started goroutine, you could
send one to a compatible environment with a 'state' of the operating routine;
the routine would need to be written in such a way, but in theory you'd be able
to pick up where you left off
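Go can't freeze a running goroutine, but the workaround Abablabab sketches,
checkpointing explicit state and resuming from it, can be approximated with the
gob package.  The PrimeState type and the step-wise worker convention are
assumptions for illustration, not anything the channel agreed on; error
handling is elided.

    package main

    import (
        "bytes"
        "encoding/gob"
        "fmt"
    )

    // PrimeState is the explicit, serialisable state of the worker:
    // everything it needs to pick up where it left off.
    type PrimeState struct {
        Next   int   // next candidate to test
        Primes []int // primes found so far
    }

    // worker advances the computation by a fixed number of steps,
    // then hands back its state for checkpointing.
    func worker(s PrimeState, steps int) PrimeState {
        for i := 0; i < steps; i++ {
            isPrime := s.Next > 1
            for _, p := range s.Primes {
                if s.Next%p == 0 {
                    isPrime = false
                    break
                }
            }
            if isPrime {
                s.Primes = append(s.Primes, s.Next)
            }
            s.Next++
        }
        return s
    }

    func main() {
        state := worker(PrimeState{Next: 2}, 10)

        // "Freeze": encode the state; these bytes could travel over
        // a network channel to another process.
        var buf bytes.Buffer
        gob.NewEncoder(&buf).Encode(state)

        // "Unfreeze" elsewhere and resume the computation.
        var resumed PrimeState
        gob.NewDecoder(&buf).Decode(&resumed)
        resumed = worker(resumed, 10)
        fmt.Println(resumed.Primes)
    }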
23:47 < Soultaker> By the way, why not use occam or so if that does what you
want?
23:47 < Abablabab> because it's a pig; it's a CSP-safe language, so you have
to do freaky stuff if you want anything that's non-determinable at compile time
23:48 -!- tsykoduk [~tsykoduk@2001:470:1f04:671:20d:93ff:fe77:1dc4] has quit [Read
error: Operation timed out]
23:48 < Soultaker> hmmm...  there may be a reason for that
23:48 < Abablabab> there are many good reasons for it
23:49 < Abablabab> but it means you cant really do anything with it apart
from embedded systems and such
23:49 < Abablabab> erlang has some amazing features, especially in
concurrency, but im incapable of working with functional languages
23:51 -!- tsykoduk [~tsykoduk@2001:470:1f04:671:20d:93ff:fe77:1dc4] has joined
#go-nuts
23:53 < Abablabab> So im quite stuck, i'd really like to use Go for my
dissertation, i guess i'll have to change my angle of attack
23:55 < Soultaker> probably....  what's your dissertation about?
23:55 < Abablabab> distributed process oriented systems
23:56 < Soultaker> that general?  or something more specific?
23:56 < Abablabab> it's pretty general right now, im seeing where i can go
with it
23:57 < Abablabab> something in process failover or distributed processing
--- Log closed Sun May 23 00:00:25 2010