Everything posted by Jurgis

  1. Presumably markets value "real companies" higher than tracking stocks. It is easier to sell the company.
  2. Munger sounds annoyed like heck.
  3. Perhaps I should ask if people are paying taxes on cryptocurrency gains. 8)
  4. Yeah, ethics is a harsh mistress. (But then I repeat myself.) Clearly there is no single ethics that people agree on. And even though I am no ethics expert, from what I've seen ethicists do not agree on a single ethical theory either. The theories are possibly either too simplistic (pure utilitarianism) or too complicated to use in real life. On the other hand, I think there are three areas where this is worthwhile to consider:

     1. You can try to figure out what you want to do in your life. Do you want to donate/volunteer/help, and how much? https://www.thelifeyoucansave.org/ BTW, one of the arguments in Peter Singer's ethics course is that you might do the most good going the Buffett way: earning craptons of money and giving it to charity vs. just volunteering for free. So you can be an investor and ethical too! 8) Another part of this is that you can think about whether buying a fancy restaurant meal is better than buying a simpler one and donating X to people in extreme poverty. Or other scenarios that might come up as you think about this.

     2. As we progress toward a future where people can wield weapons of mass destruction single-handedly, is it possible to agree on at least some common morality and ethics? Can we get there? Or are we doomed to strife/terrorism/wars to extinction? How can we get there? Ideally we'd all be libertarian communists 8) loving and supporting each other out of the greatness of our hearts and minds. But is there a path to that?

     3. AI. What kind of ethics systems can we impart on (super)human AI? Yeah, we can brush this off as "we'll do a variety of systems and it will work out", but that's even worse than human-based terrorism. If a superhuman AI decides that the Chinese are evil, it could just nuke them... ::)

     Anyway, interesting but very tough area. 8) Peace and love and all that :-*
  5. BTW, it should be possible to apply similar techniques to merger outcome prediction and get better results in merger arbitrage. However, IMO the legal prediction has fatter returns if you can do it.
  6. I think this could be a great side-way for investing returns: http://www.sciencemag.org/news/2017/05/artificial-intelligence-prevails-predicting-supreme-court-decisions Assuming the technology can be applied to more accurately predict patent litigation (techs/biotechs), or other financial litigation (Fannie, BK/post-BK cases), there should be a yuge opportunity for outsized returns, since I think most people invest in these areas by guesswork that has yuge biases. Instead you could have a more accurate probability of win and use more precise Kelly for investing. I won't work on this, since it's way outside my area of expertise, but someone who does could make $$$$$. You've heard it here first. 8)
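     The sizing idea in the post above (turn a model's win probability into a position size via Kelly) can be sketched as follows. This is a minimal illustration of the standard Kelly formula for a binary bet; the probabilities and payoffs are made-up numbers, not output from any actual prediction model:

     ```python
     def kelly_fraction(p_win, gain, loss):
         """Kelly-optimal fraction of bankroll for a binary-outcome bet.

         p_win: estimated probability the position works out (e.g. litigation won,
                merger closes)
         gain:  fractional gain on the position if it works (e.g. 0.04 = 4% spread)
         loss:  fractional loss if it breaks (e.g. 0.30 = 30% drop)

         Maximizing expected log wealth p*ln(1+f*gain) + (1-p)*ln(1-f*loss)
         gives f = p/loss - (1-p)/gain.
         """
         q = 1.0 - p_win
         f = p_win / loss - q / gain
         return max(0.0, f)  # a negative edge means don't take the bet

     # Illustrative only: model says 90% chance a merger closes for a 4% spread,
     # 10% chance it breaks for a 30% loss.
     f = kelly_fraction(0.90, gain=0.04, loss=0.30)
     print(f"Kelly fraction: {f:.2f}")  # full Kelly; many practitioners size at half Kelly or less
     ```

     Note that full Kelly is very aggressive and extremely sensitive to the estimated probability, which is exactly why a more accurate model, as suggested above, would matter.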
  7. Just got back... and Warren's quarreling with Charlie... what happened? 8) Edit: you guys don't have to answer... it was a rhetorical question. Maybe the first time I've seen them talk across each other. 8) Tough topic. 8)
  8. Snowball is a great book. 8)
  9. Buffett said not buying Google was a mistake. He said Bezos is an extraordinary CEO, though he did not explicitly say that not buying Amazon was a mistake. I think he came close, but he did not say it explicitly. No mention of FB... yet. 8)
  10. 10% compounding incoming. 8) (might be understated as in the past)
  11. Oh, no, KO controversy again. :-\ I stopped drinking sodas, period. My quality of life only improved. So there, Warren and Charlie. ;D 8)
  12. "sounds like plays on cannibals" Yeah, that's a good observation, Gamecock-YT.
  13. With IBM and airlines it looks like BRK is moving out of the moat-hold-forever investments into buy-somewhat-crappy-business-somewhat-cheap-sell-it-hopefully-a-bit-higher... ::)
  14. IMO directors do very little in these kinds of situations. Canning the current WFC directors might send a message that directors should pay closer attention. Not sure it would make much difference. IMO the management and company culture matter more. I don't really know if Buffett's belief in WFC is well substantiated ;). We'll have to see how it goes from here.
  15. "Works on IE11." "Was not working for me on Opera." Ha, that's interesting! I just also found that out. Works on IE 11, but not Chrome. GAAATTTTEESS! Thanks! Yeah, I was thinking that Opera and Chrome might have similar issues, since Opera is based on Chromium (Chrome's open-source codebase). There are differences between the two, so I was not confident enough to say that Chrome might have the same behavior.
  16. "I sold out in mid 2016 at break even. The business deteriorated more than I was expecting, so my estimate of IV went down. I estimated IV to be a tad above $200 per share in 2013 and expected it to grow at a modest pace annually. So I bought it when it was first available at about $150. My estimate of how the business would progress turned out to be wrong, and my IV estimate went down as well. When there did not seem to be much margin of safety, I sold. I was conscious of the anchoring bias, and I like to think that selling at break even was not due to that, but who knows. Buffett's investment was quite consciously an important factor in my purchase, primarily because it means the blow-up risk is likely low. There is no reason to ignore such an important data point. - Vinod"

      Thanks for the update. Even though you crossed out "highly respected", I think you and KCLarkin post deep, well-prepared analyses. It would have been nice if you guys had posted when your opinion changed, but of course there's no obligation for you to do that. 8) Good luck. Disclosure: No position. I did not clone this or any other vinod1/KCLarkin positions or rely on their arguments to make investment decisions. 8)
  17. "I sold out in early 2016 (at a big loss). I just couldn't get any confidence from the CFO that they could or would stabilize the core business." Thanks for the update.
  18. It's likely not there yet in terms of long-term predictions/investing. Most of these models now do short-term predictions that may or may not work. I think they will get to long-term predictions within 10 years or so. Although there's a lot of short-term/technical/price-based prediction mentality in financial AI/ML. Not many (any?) people are doing fundamentals-based longer-term predictions.
  19. Isn't it just tracking India market results and underperforming it too? Not sure if this works: http://www.marketwatch.com/investing/stock/iif/charts?symb=IIF&countrycode=US&time=8&startdata-ipsquote-timestamp=1%2F4%2F1999&enddata-ipsquote-timestamp=5%2F5%2F2017&freq=1&compidx=none&compind=none&comptemptext=FFXDF&comp=FFXDF&uf=7168&ma=1&maval=50&lf=1&lf2=4&lf3=0&type=2&size=2&style=1013 I compared to IIF since that's the fund I know from way past. There might be better India funds currently. Maybe not completely Fair ;) comparison since they had a bunch of cash in the beginning...
  20. Why not? The guy was right. He was dissed on this board and elsewhere. Will these people now come out and say that they were wrong? Somehow I doubt it. Some highly respected CoBF posters made quite convincing bullish arguments for IBM on this board. Do they still believe it's a good long term investment? Were the arguments (subconsciously) influenced by Buffett's position?
  21. I know! I know! Tesla should sell Model 3 for 3.1 MILLION!
  22. What is their competition? How big/branded is the competition and how big/branded is CynergisTek? Are they competing with small-fry companies or are there whales in this area? I don't have a back channel into this industry, so I don't know jack. Print services, from what I've heard, is a possibly stable business with some lock-in, but also somewhat crappy. I'm not sure there's an expansion runway. Probably not something to get excited about.
  23. Another crickets thread. 8) I am listening. 8) © NSA 8)
  24. "I've never understood why the trolley problem is a problem. If I know one group of people and not the other (regardless of group size) I flip the switch to save the group I know. If one group is more attractive than the other, I flip it to kill the uglier group. Otherwise why is it my problem? I leave it alone and yell, 'Hey you, idiots, get off the tracks!'"

      Because we are making trolley-problem decisions every day of our lives. The cat decision from another thread ("spend 3K on your cat's surgery or save X cats in a shelter instead") is close to a pure example. However, you can even think about "spend Y on a restaurant/computer game/fancy dress or save Z people in Africa" ( https://www.thelifeyoucansave.org/ ) as a trolley problem. It's even more skewed in terms of overall value vs. your own convenience value, but we (unconsciously?) choose the first every day, or at least most of the time.

      Your answer is one choice: base the decision on tribal/familial/emotional attachments. Some people are OK with that; some people disagree. It may lead to very egoistical decisions if taken to the extreme. On the other hand, utilitarian answers to this have issues too. I don't think it's a fake problem in the larger context. So I am not completely trolling. (And I posted it on this thread as an example that triggers hot-button reactions from people.)

      Bonus question: if we develop (super)human AI, what kind of ethics should it follow? What should it do in trolley situations? Should it give you gourmet meals while people are starving in Africa? (Note that fully utilitarian ethics for AI would also have issues...)

      Edit: there is a side issue to the trolley problem: some people cannot consciously kill people even if it means saving others. I find that part less interesting.