Outline for an information theoretic search engine

@jeff_catlin ...treats the answer set as "reverb", except for the 1-best document: the first-ranked document, the one with the strongest signal. https://t.co/zXVRFPU0WV
@maismail @etzioni containing documents about full containers, along with a document explaining milk cartons. https://t.co/zXVRFPU0WV Am I being clear?
@fchollet Have not implemented this, but it may outperform DL for conversational AI: much less computational complexity, no model building. https://t.co/zXVRFPU0WV Consistent underlying philosophy: view language as information, not as grammar (rules).
@DeepMindAI @GoogleHealth Entropy minimization can be one result of Shannon's noisy channel model. Think of an answer set as signal & reverb. Apply de-reverberation (take only the strongest version of the reverb signals), and we are done. https://t.co/zXVRFPU0WV
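The tweets above say the design has not been implemented, so here is one illustrative sketch (my assumptions, not the author's code) of the "de-reverb" step: treat near-duplicate documents in an answer set as reverberations of one signal, and keep only the strongest copy of each. The `dereverb` function, the similarity threshold, and the use of length as a proxy for signal strength are all hypothetical choices.

```python
from difflib import SequenceMatcher

def dereverb(docs, threshold=0.6):
    """Treat near-duplicate documents as 'reverb' of one signal:
    greedily scan documents longest-first and keep a document only
    if it is not too similar to any already-kept 'strongest' copy."""
    kept = []
    for doc in sorted(docs, key=len, reverse=True):
        if all(SequenceMatcher(None, doc, k).ratio() < threshold for k in kept):
            kept.append(doc)
    return kept

# Hypothetical answer set: two near-duplicate "milk" documents plus one distinct one.
answers = [
    "Milk is sold in one-litre cartons.",
    "Milk is usually sold in one-litre cartons at the store.",
    "Full containers are sealed before shipping.",
]
print(dereverb(answers))
```

With these toy inputs, the two "milk" sentences are similar enough that only the longer one survives, while the "containers" document is kept as a separate signal.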
@DeepMindAI @GoogleHealth Entropy minimization is one method of performing autofocus on X-ray images. It can be applied to a host of similar problems: RADAR backscatter and....search engines. https://t.co/zXVRFPU0WV
@jeff_catlin For conversational AI with less model building (save $$$) and lower computational complexity, try this: https://t.co/zXVRFPU0WV
@pfau Another thing: these models are of great computational complexity and building the language models is expensive. For search engines/conversational AI, I offer this alternative. https://t.co/zXVRFPU0WV
@WIRED Dr Allison uses text mining and conversational AI to access medical text (maybe PubMed?). This design for medical AI views the answer set as original copy & reverberation, then uses the noisy channel model to remove the reverb and retain the one copy. https://t.co/zXVRFPU0WV
@GaryMarcus Conversational AI (e.g. Siri) has roots in a sender-receiver model. Let's bring this relatively simple model back wherever appropriate. https://t.co/zXVRFPU0WV Deep Learning skeptics: Gary Marcus, Francois Chollet, Riza C Berkan. Conv. AI: there's more than one way to do it.
@drnic1 Maybe this design for multi-turn conversational AI will help remedy the problem? https://t.co/zXVRFPU0WV
@sschoenholz Maybe a great example of this is the study of telecom and its relatedness to physics. Turns out it's possible to construct a search engine (telecom) w/ the same math used to fight RADAR backscatter. https://t.co/zXVRFPU0WV
@ylecun My .002 cts: for conversational AI, at least, DL is overkill. Information theory enables us to eliminate reverb, a form of noise, so we can apply the noisy channel model. Every document in an answer set is reverb, except the one with the lowest entropy. https://t.co/zXVRFPU0WV
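The tweet above states the selection rule directly: every document in an answer set is reverb except the one with the lowest entropy. As a minimal sketch of that rule (my illustration, not the author's implementation), we can score each document by the Shannon entropy of its character distribution and keep the minimum. The function names and the character-level (rather than word-level) distribution are assumptions.

```python
import math
from collections import Counter

def entropy(text):
    """Shannon entropy (bits per character) of a document's
    character distribution: H = -sum(p * log2(p))."""
    counts = Counter(text)
    total = len(text)
    return -sum(c / total * math.log2(c / total) for c in counts.values())

def lowest_entropy_doc(answer_set):
    """Under the entropy-minimization reading, the one document with
    the lowest entropy is the 'signal'; all others are treated as reverb."""
    return min(answer_set, key=entropy)

print(lowest_entropy_doc(["abcd", "aaaa"]))  # the uniform string has maximal entropy
```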
@AnimaAnandkumar Not knowledgeable enough to discourse on Deep Learning, but might "getting closer to the text" be a way to defeat bias? Ditch Pāṇini and Chomsky with their difficult and expensive linguistic analyses, and let's focus on the process of communication. https://t.co/zXVRFPU0WV
@emilymbender @GaryMarcus My design (sheer information theory) requires no model building, and has low computational complexity. Could that be preferable for some cases, or a class of cases? Comments welcome. https://t.co/zXVRFPU0WV
@emilymbender @GaryMarcus Thx for posting this. Skimmed it. I found three critics of DL: Gary Marcus, Riza C Berkan and Francois Chollet. My agenda is admittedly to push my own Conversational Assistant design, and what can I say but "let the best method win". https://t.co/zXVRFPU0WV
@HealthITNews Multi-turn chat bot not based on neural nets but on straightforward information theory https://t.co/zXVRFPU0WV Criticism of DL by Francois Chollet, Gary Marcus, and Riza C Berkan.
@LofredM @ZDNet @TiernanRayTech I count three major NLP-ers who are critics of Deep Learning: Gary Marcus, Francois Chollet, and Riza C Berkan. For conversational AI I offer an alternative to DL: conversational AI based on information theory. Comments welcome. https://t.co/zXVRFPU0WV
@ProgrammerBooks @Comatose_D Chat bot design based on Claude Shannon's work; it has yet to be implemented. Should you do so, please mention me as the architect and let me know. https://t.co/zXVRFPU0WV
@CXemotion @SethGrimes @PopSci I agree. Worker in the field: Dr Philip Resnik. I envision relatively tiny chat bots (conversational AI, ex. Siri, Cortana): tiny in the sense of narrow, specialized scope (e.g. sociopathy), but wide in terms of covering the entire Internet. https://t.co/zXVRFPU0WV
@ML_NLP Please consider this alternative to BERT: https://t.co/zXVRFPU0WV no language model training.
@HealthcareWen @Transtechlab @eboyden3 @MIT @nmanaloto @CATALAIZE @nicolecwong @ruima @TransTech200 @danielchao @mollymaloofmd @sydneyskybetter @SpatialK @dngoo Innovation is economics and technology intertwined; not the same thing as science, which is usually reductionist and collapses distinct phenomena into one (an apple falling, planets orbiting). Here RADAR and search engines are brought together (both use similar maths): https://t.co/zXVRFPU0WV
@FortuneMagazine @hspter As far as Stitch Fix's "astrophysicist approach", there is a relationship between RADAR (heavily used by physicists probing the universe), search engines (heavily studied by NLP people), and medical imaging (autofocus for sharper pics). https://t.co/zXVRFPU0WV
@SantoshStyles @stanfordnlp Multi-turn dialogue model: query, cut-and-paste from answer doc., repeat. Treats text as information, not as language: requires very little linguistic analysis. Uses a reductionist approach (science) and is shown to be like RADAR and autofocus (med. imag.). https://t.co/zXVRFPU0WV