
21/01/2019

In the world with me... Look with me!

Imagine what we can do together

FRANCE WEB

At home, at the office, in town...

Supporting society... Here is the technology that supports society from where you cannot see it

Start-up looks to license the first hydrogen-powered electric car charger

In the UK, AFC Energy has demonstrated what it believes to be the world's first electric vehicle charger based on hydrogen fuel cell technology.

The demonstration of AFC Energy's CH2ARGE system took place at Dunsfold Aerodrome, where a BMW i8 became the first car ever to be recharged with power generated by a hydrogen fuel cell. The charger pairs a fuel cell with an inverter to charge vehicles, using a 48 V battery pack to meet peak power demands.

The demonstration system was sized to supply enough power to recharge two EVs simultaneously at charging levels 1, 2 or 3. The system's inverters are controlled through AFC Energy's fuel cell control system, which provides safe and precise control of the entire installation. Integration into AFC Energy's control system means that product solutions can be deployed with smart-charging capabilities.
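
The article describes the power architecture only at a high level: a fuel cell feeds an inverter, with a 48 V battery pack covering peak demand. Below is a minimal sketch of that peak-shaving idea; every figure in it (fuel cell output, buffer size, time step, load profile) is an invented illustration, not an AFC Energy specification.

```python
# Minimal peak-shaving sketch: a fuel cell supplies steady power, and a
# battery buffer absorbs the difference between that steady output and a
# fluctuating charging load. All numbers are illustrative, not AFC Energy specs.

FUEL_CELL_KW = 20.0      # steady fuel cell output (assumed)
BATTERY_KWH = 10.0       # usable buffer capacity (assumed)
STEP_H = 0.25            # 15-minute simulation step

def simulate(load_kw_profile, soc_kwh=BATTERY_KWH / 2):
    """Track battery state of charge while the load varies around the
    fuel cell's steady output."""
    for load_kw in load_kw_profile:
        surplus_kw = FUEL_CELL_KW - load_kw            # >0 charges buffer, <0 drains it
        soc_kwh += surplus_kw * STEP_H
        soc_kwh = max(0.0, min(BATTERY_KWH, soc_kwh))  # clamp to capacity
        yield load_kw, soc_kwh

# A load spike (two EVs plugged in at once) followed by a lull:
profile = [10, 10, 35, 35, 35, 5, 5]
for load, soc in simulate(profile):
    print(f"load {load:5.1f} kW -> buffer {soc:4.1f} kWh")
```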

After 10 years of fuel cell research and development, the company is preparing to commercialise fuel-cell-based EV charging solutions to meet the growing demand for environmentally friendly power in the electric vehicle market. The UK government aims for half of all new car sales to be electric vehicles by 2030, which would put nine million EVs on the road. By 2040, 100% of new car sales are expected to be electric, which would eventually see the UK's entire fleet of 36 million cars become electric.


According to UK National Grid estimates, recharging the EV fleet would require an additional 8 GW of generating capacity, while AFC Energy's calculations show that if one in ten EVs in the UK were charged simultaneously, peak demand would rise by 25.7 GW, based on an average 57 kWh battery. This maximum peak demand corresponds to roughly half of the UK's current generating requirement and is equivalent to 7.9 new nuclear power stations or 17,100 wind turbines. Popular sites such as sports centres, stadiums and supermarkets will also need to scale up their EV charging solutions; a scenario in which 25% of vehicles are EVs and half of them connect to charge on site would require 11.5 MW of power generation. Significant investment in new power stations and in upgrading the distribution network would be needed unless these requirements were met by localised power generation.
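
The 25.7 GW figure can be reconstructed with back-of-the-envelope arithmetic, assuming the average 57 kWh battery is replenished over a roughly two-hour charge; that duration is an assumption, since the article does not state it.

```python
# Back-of-the-envelope reconstruction of the article's peak-demand figures.
# The two-hour charge duration is an assumption; the article only gives the
# 57 kWh average battery and the one-in-ten simultaneity factor.

evs_on_road = 9_000_000          # projected UK EVs by 2030
simultaneous = evs_on_road // 10 # one in ten charging at once
battery_kwh = 57.0
charge_hours = 2.0               # assumed charge duration
per_car_kw = battery_kwh / charge_hours         # ~28.5 kW per car
peak_gw = simultaneous * per_car_kw / 1_000_000 # kW -> GW
print(f"peak demand: {peak_gw:.2f} GW")         # ~25.7 GW as quoted

# The equivalences quoted in the article then imply roughly:
print(f"per nuclear plant: {peak_gw / 7.9:.2f} GW")          # ~3.2 GW each
print(f"per wind turbine: {peak_gw * 1e6 / 17_100:.0f} kW")  # ~1.5 MW each
```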

The CH2ARGE system could potentially deliver locally generated electricity through thousands of installations producing 100% clean power. By contrast, supplying this electricity from central generating plants would require massive investment in new generating capacity and a new distribution network architecture.

ELECTRIC VEHICLE TRENDS FOR 2019

"By 2030 it is estimated there could be nine million electric vehicles on UK roads, up from around 90,000 today," said Adam Bond, chief executive of AFC Energy. "For this transition we need charging stations integrated across the country, as well as innovative solutions to overcome the severe limitations of centrally generated electricity. In developing and demonstrating the effectiveness of our hydrogen fuel cell in an EV charging application, AFC Energy has shown it is ready to lead the way in addressing rising electricity demand, and to do so as part of a zero-emissions approach."

AFC Energy is looking to begin discussions with potential OEM partners and suppliers for the production of its scalable CH2ARGE EV charging systems with a view to commercial deployment.

www.afcenergy.com

Related stories:

Un onduleur dense renforce les véhicules électriques.pdf

Fenêtre voltaique en pérovskite.pdf

Toyota et Panasonic envisagent la batterie gigaventure.pdf

Renesas.pdf

Les moteurs de l'électromobilité en 2019.pdf


TIC-TOC.pdf

aztarna, a footprinting tool for robots.pdf

Association CharIN.pdf

Convertisseurs DC-DC XP Power.pdf

Google exploite des panneaux solaires de 1,6 m pour alimenter des centres de données américains.pdf

 


Jeannette Wing’s influential article on computational thinking 6 years ago argued for adding this new competency to every child’s analytical ability as a vital ingredient of science, technology, engineering, and mathematics (STEM) learning. What is computational thinking? Why did this article resonate with so many and serve as a rallying cry for educators, education researchers, and policy makers? How have they interpreted Wing’s definition, and what advances have been made since Wing’s article was published? This article frames the current state of discourse on computational thinking in K–12 education by examining mostly recently published academic literature that uses Wing’s article as a springboard, identifies gaps in research, and articulates priorities for future inquiries.

Jeannette M. Wing, born December 4, 1956, is a professor of computer science at Carnegie Mellon University, where she is head of the Computer Science Department. She received her Bachelor's and Master's degrees from the Massachusetts Institute of Technology in 1979, and her PhD in 1983, also from MIT. (Wikipedia)

Computational thinking and media & information literacy: An integrated approach to teaching twenty-first century skills

Abstract

Developing students’ 21st century skills, including creativity, critical thinking, and problem solving, has been a prevailing concern in our globalized and hyper-connected society. One of the key components for students to accomplish this is to take part in today’s participatory culture, which involves becoming creators of knowledge rather than being passive consumers of information. The advancement and accessibility of computing technologies has the potential to engage students in this process. Drawing from the recent publication of two educational frameworks in the fields of computational thinking and media & information literacy and from their practical applications, this article proposes an integrated approach to develop students’ 21st century skills that supports educators’ integration of 21st century skills in the classroom.

Computational Thinking.pdf

References

  1. Appleyard, N., & McLean, L. (2011). Expecting the exceptional: pre-service professional development in global education. International Journal of Progressive Education, 7(2), 6–32.
  2. Armoni, M., Meerbaum-Salant, O., & Ben-Ari, M. (2015). From Scratch to "real" programming. ACM Transactions on Computing Education, 14(4), 25.
  3. Barr, V., & Stephenson, C. (2011). Bringing computational thinking to K-12: what is involved and what is the role of the computer science education community? ACM Inroads, 2(1), 48–54.
  4. Boulianne, S. (2015). Social media use and participation: a meta-analysis of current research. Information, Communication & Society, 18(5), 524–538.
  5. Brennan, K. (2009). Scratch-Ed: an online community for Scratch educators. In Proceedings of the 9th international conference on computer supported collaborative learning (Vol. 2, pp. 76–78). International Society of the Learning Sciences.
  6. Brennan, K. (2014). Social dimensions of computing education. NSF Future Directions in Computing Education Summit. Retrieved from: http://web.stanford.edu/~coopers/2013Summit/BrennanKarenHarvard.pdf.
  7. Brookshear, J. G. (1997). Computer science: an overview (5th ed.). Reading, MA: Addison-Wesley.
  8. Buckingham, D. (2007). Beyond technology: children's learning in the age of digital culture. Cambridge: Polity.
  9. Buckingham, D. (2015). Do we really need media education 2.0? Teaching media in the age of participatory culture. In T. Lin, D. Chen, & V. Chai (Eds.), New media and learning in the 21st century (pp. 9–21). Singapore: Springer.
  10. Carroll, J. (2014). Soft versus hard: the essential tension. In D. Galletta & P. Zhang (Eds.), Human-computer interaction in management information systems (pp. 424–432). Armonk, NY: Sharpe.
  11. Cogan, J., Derricott, R., & Derricott, R. (2014). Citizenship for the 21st century: an international perspective on education. New York: Routledge.
  12. College Board. (2014). Advanced Placement Computer Science Principles: curriculum framework. Retrieved from: http://secure-media.collegeboard.org/digitalServices/pdf/ap/ap-computer-science-principles-curriculum-framework.pdf.
  13. Davies, R. S. (2011). Understanding technology literacy: a framework for evaluating educational technology integration. TechTrends, 55(5), 45–52.
  14. Devlin-Foltz, B. (2010). Teachers for the global age: a call to action for funders. Teaching Education, 21(1), 113–117.
  15. Eisenberg, M. B., Lowe, C. A., & Spitzer, K. L. (2004). Information literacy: essential skills for the information age. Westport: Greenwood.
  16. Felini, D. (2015). Crossing the bridge: literacy between school education and contemporary cultures. Research on Teaching Literacy Through the Communicative and Visual Arts, 2, 19–25.
  17. Frau-Meigs, D. (2007). Media education: a kit for teachers, students, parents and professionals. Retrieved from: http://portal.unesco.org/ci/en/ev.php-URL_ID=27056&URL_DO=DO_TOPIC&URL_SECTION=201.html.
  18. Gee, J. P. (2004). Situated language and learning: a critique of traditional schooling. New York: Routledge.
  19. Goodwin, M., & Sommervold, C. (2012). Creativity, critical thinking, and communication: strategies to increase students' skills. Plymouth: R&L Education.
  20. Governors Association Center for Best Practices. (2010). Common core state standards for English language arts. Retrieved from: http://www.corestandards.org/ELA-Literacy/.
  21. Grizzle, A., Moore, P., Dezuanni, M., Asthana, S., Wilson, C., Banda, F., & Onumah, C. (2014). Media and information literacy: policy and strategy guidelines. UNESCO. Retrieved from: http://unesdoc.unesco.org/images/0022/002256/225606e.pdf.
  22. Grover, S., & Pea, R. (2013). Computational thinking in K–12: a review of the state of the field. Educational Researcher, 42(1), 38–43.
  23. Guo, L. (2014). Preparing teachers to educate for 21st century global citizenship: envisioning and enacting. Journal of Global Citizenship & Equity Education, 4(1), 1–23.
  24. Gurram, D., Babu, B. V., & Pellakuri, V. (2014). Issues and challenges in advertising on the web. International Journal of Electrical and Computer Engineering, 4(5), 810–816.
  25. Hobbs, R., & Jensen, A. (2009). The past, present, and future of media literacy education. The Journal of Media Literacy Education, 1(1), 1–11.
  26. Hollandsworth, R., Dowdy, L., & Donovan, J. (2011). Digital citizenship in K-12: it takes a village. TechTrends, 55(4), 37–47.
  27. International Society for Technology in Education (ISTE). (2015). ISTE standards for students. Retrieved from: https://www.iste.org/docs/pdfs/20-14_ISTE_Standards-S_PDF.pdf.
  28. Jenkins, H. (2006). Confronting the challenges of participatory culture: media education for the 21st century. John D. and Catherine T. MacArthur Foundation.
  29. Jenkins, H. (2009). Confronting the challenges of participatory culture: media education for the 21st century. Cambridge, MA: MIT Press.
  30. Lankshear, C., & Knobel, M. (2008). Digital literacies: concepts, policies and practices. New York: Peter Lang.
  31. Lee, J., Jr. (2009). Scratch programming for teens. Boston: Cengage.
  32. Lenhart, A. (2015). Teens, social media and technology overview 2015. Washington, DC: Pew Research Center. Retrieved from: http://www.pewinternet.org/files/2015/04/PI_TeensandTech_Update2015_0409151.pdf.
  33. Livingstone, S. (2011). Media literacy: ambitions, policies and measures. COST. Retrieved from: http://www.cost-transforming-audiences.eu/system/files/cost_media_literacy_report.pdf.
  34. McDougall, J., & Livingstone, S. (2014). Media and information literacy policies in the UK. London School of Economics. Retrieved from: http://eprints.bournemouth.ac.uk/21522/2/McDougall_Livingstone_MIL_in_UK.pdf.
  35. McLean, L., Cook, S., & Crowe, T. (2006). Educating the next generation of global citizens through teacher education, one new teacher at a time. Canadian Social Studies Journal, 40(1), 1–10.
  36. Mommers, J. (2014). Media education in four EU countries: common problems and possible solutions. My Child Online Foundation. Retrieved from: http://www.kennisnet.nl/fileadmin/contentelementen/kennisnet/Dossier_mediawijsheid/Publicaties/rapport_media_onderwijs_EU.pdf.
  37. National Research Council. (2010). Report of a workshop on the scope and nature of computational thinking. The National Academies Press.
  38. National Research Council. (2011). Report of a workshop on pedagogical aspects of computational thinking. The National Academies Press.
  39. Next Generation Science Standards (NGSS). (2013). The next generation science standards. Retrieved from: http://www.nextgenscience.org/next-generation-science-standards.
  40. Partnership for 21st Century Skills (P21). (2014). Framework for state action on global education. Retrieved from: http://www.p21.org/storage/documents/Global_Education/P21_State_Framewor_on_Global_Education.pdf.
  41. Peppler, K., Santo, R., Gresalfi, M., Tekinbas, K. S., & Sweeney, L. B. (2014). Script changers: digital storytelling with Scratch. Cambridge, MA: MIT Press.
  42. Perković, L., Settle, A., Hwang, S., & Jones, J. (2010). A framework for computational thinking across the curriculum. In Proceedings of the fifteenth annual conference on innovation and technology in computer science education (pp. 123–127). ACM.
  43. Qualls, J. A., & Sherrell, L. B. (2010). Why computational thinking should be integrated into the curriculum. Journal of Computing Sciences in Colleges, 25(5), 66–71.
  44. Resnick, M., Maloney, J., Monroy-Hernández, A., Rusk, N., Eastmond, E., Brennan, K., et al. (2009). Scratch: programming for all. Communications of the ACM, 52(11), 60–67.
  45. Shelby-Caffey, C., Úbéda, E., & Jenkins, B. (2014). Digital storytelling revisited. The Reading Teacher, 68(3), 191–199.
  46. The College Board. (2014). AP computer science principles curriculum framework. New York: College Board.
  47. Thomas, N. P. (2004). Information literacy and information skills instruction: applying research to practice in the school library media center. Westport, CT: Libraries Unlimited.
  48. Tucker, A. (2003). A model curriculum for K-12 computer science: final report of the ACM K-12 task force curriculum committee. Retrieved from: https://www.acm.org/education/education/curric_vols/k12final1022.pdf.
  49. Wartella, E., O'Keefe, B., & Scantlin, R. (2000). Children and interactive media: a compendium of current research and directions for the future. New York, NY: Markle Foundation.
  50. Wilson, E. O. (1999). Consilience: the unity of knowledge. New York, NY: Vintage.
  51. Wilson, C., Grizzle, A., Tuazon, R., Akyempong, K., & Cheung, C. K. (2013). Media and information literacy curriculum for teachers. UNESCO. Retrieved from: http://www.unesco.org/new/fileadmin/MULTIMEDIA/HQ/CI/CI/pdf/media_and_information_literacy_curriculum_for_teachers_en.pdf.
  52. Wing, J. M. (2006). Computational thinking. Communications of the ACM, 49(3), 33–35.
  53. Wing, J. (2008). Computational thinking and thinking about computing. Philosophical Transactions of the Royal Society A: Mathematical, Physical and Engineering Sciences, 366(1881), 3717–3725.
  54. Yadav, A., Mayfield, C., Zhou, N., Hambrusch, S., & Korb, J. T. (2014). Computational thinking in elementary and secondary teacher education. ACM Transactions on Computing Education, 14(1), 1–16.

Keywords

Computational thinking · Consilience · Media & information literacy · Participatory culture · Scratch · Twenty-first century skills

ResearchGate

ResearchGate is the professional network for scientists and researchers. Over 15 million members from all over the world use it to share, discover, and discuss research. We're guided by our mission to connect the world of science and make research open to all.
It started when two researchers discovered first-hand that collaborating with a friend or colleague on the other side of the world was no easy task.
Founded in 2008 by physicians Dr. Ijad Madisch and Dr. Sören Hofmayer, and computer scientist Horst Fickenscher, ResearchGate has more than 15 million members today. We strive to help them make progress happen faster.
Connect and collaborate with colleagues, peers, co-authors, and specialists.
Share your publications, access millions more, and publish your data.
Get stats and find out who's been reading and citing your work.
Ask questions, get answers, and solve research problems.
Find the right job using our research-focused job board.
Share updates about your current project, and keep up with the latest research.

Target a unique scientific audience

You’ll find researchers from virtually every field of specialization, including life science, biotechnology, chemistry, medicine, engineering, computer science, and mathematics.

The best in science

89% of our audience has a post-graduate qualification, including 64 Nobel Prize winners.

Ads

These can be targeted to members based on their skills and behavior, or placed on publications to reach relevant audiences while they research.

Emails

Delivered directly to your audience's inbox, emails drive traffic and awareness to any organizational content you want to promote.

Institution Posts

Want to introduce your audience to a new product, relevant white paper, or demonstration video? Share it on your ResearchGate Institution Page, and drive traffic to it through our Ads or Emails.

Onsite lead collection

Your Institution Post can request contact details from those wanting to download your content. The form is auto-filled with members' profile details, so it is faster to complete and captures more accurate information.

Speak with a scientific marketing specialist

11/01/2019

PoissySmartCity: Digital disruption is hitting all industries. ‘Front-to-back’ transformation is a game-changing approach that would let legacy companies quickly provide hyper-personalized products and services to their customers.

Bringing People Closer Together

SRU-Electronics>Special Research Unit>

NewsCenter for everyone

A Day in the World....Discover the World !

News You Can Use At Your Fingertips Is Just A Click Away!

Making a better Internet > I AM, YOU ARE, WE ARE HAPPY in

The World is our Workplace. Let's work together.

FranceWebSharing ->Connect&Share>>Networking the World

Join the millions who trust MyNewsCenterNavigator.

Know why, Know who, Know where, Know what.

How Companies Can Leverage Technology to Deliver Hyper-Personalized Services

The need for digital transformation in companies is obvious and urgent. But many businesses, especially those burdened by legacy systems, still struggle to transform their operations to cater to the increasingly empowered digital customer. By the time companies overhaul their IT and operational infrastructures, technological developments have already moved ahead.

Dinesh Venugopal, president of the digital unit of Mphasis, an IT services company headquartered in India, has a solution that doesn’t need a complete revamping of legacy infrastructure. He calls it the ‘front-to-back’ transformation, a game-changing approach that would let legacy companies quickly provide hyper-personalized products and services to their customers. Venugopal spoke to Knowledge@Wharton to explain why it doesn’t have to take a fortune and years of implementation to digitally transform.

An edited transcript of the conversation follows.

Knowledge@Wharton: How are companies trying to go digital today? What’s different about the way that they are approaching this issue now compared to how they might have done it in the past?

Dinesh Venugopal: Digital disruption is hitting all industries. Maybe less so in the financial service industry because of regulatory reasons, but it is affecting all industries and … different industries are coping differently. If you ask the CXOs what is keeping them awake at night, they will tell you that it’s not so much Amazon, Google or somebody like that coming and taking over their industry; they’re more concerned about the fact that customers are demanding from them the same kind of services that they’re getting from Amazon or Google. That means companies have to start delivering the same level of personalized or hyper-personalized services to these customers.

Now, if you ask a bank or a financial service institution why they’re not able to do that [now], they will tell you that they are sitting on top of a lot of legacy systems. For example, take an insurance company that may have four or five different policy administration systems built 10 or 15 years ago. What are these systems good at? They’re very good at scale. They’re very good at providing customers the right data. And they’re also good at what is known as mass personalization. That is, they can target segments of the market.

Digital disruption is hitting all industries.

What they are not is flexible. They are not systems that can be easily changed, and they are not systems that can be hyper-personalized — simply because that’s the way that they have been built. So what options do enterprises have in front of them? There are two options. One is to completely take those old core systems and modernize. Some of them are taking that approach. But the problem with that approach is that it’s not easy. It takes two, three, four years to completely modernize all of your systems. And by the time these modernization projects are done, the industry has moved on. Newer products have come along.

So what do we do? There is an approach that we call "end transformation." It is all about starting with your end stakeholder in mind, looking at the specific use cases that make sense for that customer and how we can add value for them, and starting to work from there. You do that by building an intelligent middle layer, which talks to your core systems, pulls out the data and services, and provides them back to the customer through your engagement layer.
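
A schematic sketch of that layering, with all class and method names invented: the engagement layer talks only to the intelligent middle layer, which orchestrates the untouched core systems.

```python
# Schematic sketch of front-to-back layering: an intelligent middle layer
# pulls data and services from a legacy core system and serves the
# engagement layer, so the core does not have to change. Names are invented.

class LegacyPolicyAdmin:
    """Stands in for a decades-old core system: reliable, at scale, inflexible."""
    def get_policy(self, customer_id: str) -> dict:
        return {"customer": customer_id, "product": "travel", "status": "active"}

class IntelligentLayer:
    """Middle layer: orchestrates core systems and adds contextual value."""
    def __init__(self, core: LegacyPolicyAdmin):
        self.core = core

    def policy_summary(self, customer_id: str) -> dict:
        policy = self.core.get_policy(customer_id)
        # Hyper-personalized enrichment lives here, not in the core.
        policy["personalized_tip"] = "Your destination has a health advisory."
        return policy

class EngagementLayer:
    """Customer-facing layer: depends only on the intelligent layer."""
    def __init__(self, middle: IntelligentLayer):
        self.middle = middle

    def render(self, customer_id: str) -> str:
        return str(self.middle.policy_summary(customer_id))

app = EngagementLayer(IntelligentLayer(LegacyPolicyAdmin()))
print(app.render("cust-7"))
```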

This is a different approach simply because it helps customers get their transformation sooner. Not the entire transformation, but they get wide chunks of value sooner to the customers than ever before. That approach is what we call front-to-back transformation. It’s a key — and important — way in which you can start providing hyper-personalized services to customers.

Knowledge@Wharton: Personalization per se has been around from the earliest days of the internet. But from what you are saying, it sounds like there is a different degree of expectation on the part of customers for what hyper-personalization is all about. How can banks and other financial institutions deliver on that kind of granularity in terms of customer expectations?

Venugopal: I’ll give you two examples of how a typical, traditional organization would have looked at a service that they’re providing, and how a more completely digital company would look at the same service in providing a hyper-personalized service. Here’s a simple example of a credit card transaction. You may be in a mall and swipe your credit card. A traditional service would do this job really well, which is record your transaction, look inside and see if you have the right balance, and do all the checks, and make sure the payment goes through. It does this in a very short amount of time and it’s optimized for that. It is at scale and it is mass personalized because it is personalized for a merchant or a particular type of transaction.

First and foremost, start with the end customer in mind.

Now, if I were a digital company, I would look at not just the transaction itself but the context in which the transaction happened. The company knows that you just swiped your card in a mall … and it might also know that the same credit card company offers a discount or a coupon at a different store close by, so it can immediately offer you a 20% discount. … The chances of that person walking into the next store and making a purchase are very, very high. That is called contextualized service. It's an example of how hyper-personalization can be used in the context of a simple credit card transaction.
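
A minimal sketch of that contextualized decision, with all names, fields and the discount rule invented for illustration: the point is that the offer is computed from the swipe event plus real-time context, not from an overnight batch.

```python
# Illustrative sketch of a contextualized offer at card-swipe time.
# Names, fields, and the discount rule are hypothetical.

from dataclasses import dataclass
from typing import Optional

@dataclass
class SwipeEvent:
    card_id: str
    merchant: str
    location: str  # e.g. a mall identifier

@dataclass
class Offer:
    store: str
    discount_pct: int

# Hypothetical real-time context feed: active offers near each location.
NEARBY_OFFERS = {
    "mall_42": [Offer(store="BookNook", discount_pct=20)],
}

def next_best_offer(event: SwipeEvent) -> Optional[Offer]:
    """Pair the transaction with its context the moment it happens,
    instead of waiting for overnight reconciliation."""
    offers = NEARBY_OFFERS.get(event.location, [])
    return offers[0] if offers else None

offer = next_best_offer(SwipeEvent("c123", "ShoeWorld", "mall_42"))
if offer:
    print(f"Push offer: {offer.discount_pct}% off at {offer.store}")
```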

I'll give you another example, of a travel insurance company. When you buy travel insurance, your insurance company collects all kinds of data about you. It knows your flight information and what cities you [will be traveling] in, and then it shows you a policy. After that, in most cases, the insurance company goes silent until it's time for you to submit a claim.

But imagine that you are in Disneyland and there is a measles outbreak. Probably the insurance company has information about whether you had vaccinations or not, and especially in the case of international travel, you would have that information. They can issue an immediate alert saying, ‘Look, there is something going on here that you need to watch out for to prevent you from falling sick and maybe even prevent a claim from happening.’ That’s loss prevention. It’s not easy to do this, because you have to now have your internal information, which is sitting in systems, marry with the external information — all the situations data, contextual data — and provide that insight back to the customer.

Knowledge@Wharton: It seems that the ability to marry different kinds of data is critical in order to make hyper-personalization possible. What are some of the challenges that companies face in trying to combine data in a different way? And specifically, what is the role of big data analytics in helping to make those kinds of connections?

Venugopal: There are three specific issues here. One is that your data in a traditional enterprise, whether it is a financial services organization or not, is scattered across multiple systems or buckets. The data is not in one system; it's in many different systems. Second is making the contextual information real time. It's important to provide the information of [a discounted offer nearby] to the customer who just swiped a credit card. In most cases, the information goes back into a data store and gets reconciled overnight. By the time [it gets to the consumer,] the information is stale. It is too late if it's not delivered in real time.

[Another] problem is what I call the data-dialect problem. That is, different systems speak in different data forms. A customer in one system is very different from a customer in another system: even though you may be a bank customer, your mortgage information could sit in a separate data store, so a "customer" in that store is different from a "customer" in the retail banking store. That's where you have the issue of the data dialect: how are these different data going to communicate?

The best way to solve these problems is one that we call the "net new data" solution. You don't need all the information that you've stored in a system for many, many years to provide this contextual information. You just need the transaction information at the time the transaction is being done, plus the contextual information that is right there with you when you're actually swiping the card. You can use just that data and act on it right there, without having to go through multiple systems to obtain it. We call this the usage of net new data.

If I were a digital company, I would look at not just the transaction itself but the context in which the transaction has happened.

We believe it’s important to build what we call ‘knowledge models,’ instead of having data being stored in multiple systems and being pulled into one big data lake. If you have good knowledge models, you can actually solve the data dialect problem. These are two simple but important examples of how you can take advantage of what we call data in motion, and solve certain contextual data, without having to completely overhaul your entire data program.
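
As a sketch of what such a knowledge model might look like in miniature (systems and field names invented): a per-system mapping that translates each source's "dialect" into one canonical customer record, instead of pooling everything into a data lake.

```python
# Sketch of a "knowledge model" resolving the data-dialect problem:
# each source system describes a customer differently, and a per-system
# mapping normalizes records to one canonical form. Field names are invented.

CANONICAL_FIELDS = ("customer_id", "full_name", "product")

# Per-system field mappings (the "dialect" of each source).
DIALECTS = {
    "mortgage_system": {"cust_no": "customer_id", "borrower": "full_name",
                        "loan_type": "product"},
    "retail_system":   {"id": "customer_id", "name": "full_name",
                        "account_kind": "product"},
}

def to_canonical(system: str, record: dict) -> dict:
    """Translate a source record into the canonical customer shape."""
    mapping = DIALECTS[system]
    return {mapping[k]: v for k, v in record.items() if k in mapping}

print(to_canonical("mortgage_system",
                   {"cust_no": "A1", "borrower": "J. Doe", "loan_type": "fixed"}))
print(to_canonical("retail_system",
                   {"id": "A1", "name": "J. Doe", "account_kind": "checking"}))
```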

Traditionally, what a company would have done is say, ‘I need to get some good use case out of my data, I want to get some hyper-personalized data. How do I do it? Let me do this massive data project.’ … What we [recommend] is to start with customer use cases. In this case, you are trying to provide the next best offer at the time that the merchant swipes [the consumer’s] card. Use that as a use case and say, ‘If I were to solve it, what kind of data would I need?’ That approach is what we call the front-to-back approach to hyper-personalization.

Knowledge@Wharton: In the past few years, there has been a tremendous proliferation of cloud and cognitive computing, and also AI and machine learning have been developing very rapidly. I wonder if the emergence of some of these technologies makes it easier for different types of data to be married together, and makes the hyper-personalization process easier. Do you think that’s the case?

Venugopal: Absolutely. We look at it as a spectrum. On the one hand, you have … your Excel sheet type macros. Then you add things like robotic process automation, which actually helps us speed up the process of either collecting data or providing a service. Then you have semi-autonomous computing, and then you have … full-fledged artificial intelligence. Each of these technologies and solutions could absolutely be used, depending on the situation, to solve very specific hyper-personalization issues. We have several examples today of how we have used everything from simple code-level automation to artificial intelligence to solve some of these problems.

Knowledge@Wharton: Can you give me some examples?

Venugopal: I'll give you the example of how we solved the KYC problem in a bank (KYC is Know Your Customer). In a B2B bank, enterprise to enterprise, when you want to do KYC you might end up [discovering that a] company is owned by another company, which is owned by another company, which is owned by yet another company. There are a lot of nested loops in company ownership. You have to get to the root of the company and find out who the ultimate owner is before you allow a transaction.

If you look at the anti-money laundering rules and regulations around that, you need to be sure that the two entities that are entering the transactions are the right ones and they are authorized to enter the transactions…and are not part of any politically exposed list, for example.

You have to think big but implement small.

We have found that there are certain kinds of patterns, and the data is stored in various data sources. We designed a machine learning algorithm that finds these nested ownership lists and gets to the root very quickly, without any manual intervention. That's an example of a simple use of machine learning to solve a very complex but important case: KYC.
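
A minimal sketch of the ownership-chain resolution that the system automates, with the graph and watchlist invented for illustration; the real system inferred these links from scattered data sources with machine learning rather than reading a clean lookup table like this.

```python
# Sketch of KYC ownership-chain resolution: follow "owned by" links to the
# ultimate parent, with cycle protection, then screen it against a watchlist.
# The graph and the watchlist are invented for illustration.

OWNED_BY = {                 # company -> owning company
    "AcmeRetail": "AcmeHoldings",
    "AcmeHoldings": "GlobalParent",
}
POLITICALLY_EXPOSED = {"ShadyCorp"}

def ultimate_owner(company: str) -> str:
    """Walk the ownership chain to the root, flagging cyclic ownership."""
    seen = set()
    while company in OWNED_BY:
        if company in seen:  # ownership cycle: stop and flag for review
            raise ValueError(f"cyclic ownership involving {company}")
        seen.add(company)
        company = OWNED_BY[company]
    return company

def kyc_check(company: str) -> bool:
    """True if the transaction can proceed."""
    return ultimate_owner(company) not in POLITICALLY_EXPOSED

print(ultimate_owner("AcmeRetail"))  # GlobalParent
print(kyc_check("AcmeRetail"))       # True
```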

Knowledge@Wharton: Any other examples, say from the insurance industry?

Venugopal: I’ll give you another example from insurance companies that we worked with recently. There’s … a company that serves the SMB [small, medium-sized business] property and casualty market. They have a broker that works with small businesses. If a small business wants to get [an insurance] quote, it goes to the broker, the broker goes to the underwriter, the underwriter [responds] to the broker, and the broker goes back to the customer and gives a quote. That process would take two days — and it’s something that would maybe take a few minutes in a purely digital company.

What we did is go back and look at what was causing this delay. And we identified it. Some of it was due to process issues, but most of it was because the required documents that the SMB, the small business, was sending back to the underwriter were not being ingested correctly by the systems. That's where we used [our] AI-based document ingestion system, which looked at these documents, figured out what was required, pulled the relevant data, and gave it to the underwriter, who could quickly make a decision.

Second, this underwriter had to look at multiple systems to get the information to determine … the quote. She was actually trying to calculate the risk there. So we designed an underwriter’s work station that used AI machine learning, and also an element of a digital assistant, which walked them through the process of doing this quote very quickly. We were able to bring [the time it takes to get a quote] down from days to minutes through the entire process. This is a good example of how we’ve used this in the insurance industry to reduce the time of interaction between SMB, a small business user, and an underwriter.

Knowledge@Wharton: What does it take to implement a hyper-personalized sales and marketing strategy? Is it very expensive and time consuming? Could you give us a sense of the scale and scope of what it takes to do something like this for a company?

Venugopal: First and foremost, you have to start with the end customer in mind. Now, if you are doing a marketing use case, start with the use case. What is it that you’re trying to do? Once you start there, then you go back and look at designing a system that can get you the outcome in the shortest amount of time without actually looking at a massive IT systems modernization project.

The way you do this is by looking at your current system and what it can do, and pulling the capabilities you need for this new use case into what we call the intelligent layer. The intelligent layer is where you start infusing the latest and greatest technology; you mentioned cloud and cognitive. Infuse the intelligent layer with cloud and cognitive, and then build from this core back out to your end user. And it need not be very expensive if you have a clear idea of what you want to do and are very clear on what you want to achieve and in what time frame.

Now, what typically happens is that in an organization, you start designing all of the use cases that you need, and then you go back to your IT department or technology department and say, “Here are the 95 use cases I need to do. Build me a system.” They go off and process all these requirements for three or six months, come back and say they need four years to build it. This negotiation and this dance goes on for a year, and you don’t end up with anything in that year.

What we [recommend] is start with the use case and build out the reference architecture, which means that you’re not building this for a one-time use. You’re really, truly building it based on what your future looks like. Once you build out the reference architecture, your marginal cost for the second use case would be quite low. … We have found a tremendous amount of success with this approach, which we call front-to-back transformation.

Knowledge@Wharton: Having spoken to people in the finance functions of different companies over the years, I know that one of the issues that they often struggle with is the chief technology officer or the CIO very often will come with a fairly large budget request for investment in technology. But one of the things that they end up struggling with is how do we justify the ROI of that technology expense in business terms and in terms of the strategic business objectives of the company? So when it comes to hyper-personalization, what do you think should be the right framework for people to think about the ROI of investing in technology that could lead to hyper-personalization? Do you face these issues when you deal with companies? How do you deal with them?

Venugopal: Any chief information — often chief technology — officer should always, if they’re not already doing so, be looking at how to optimize the current set of services and create a budget for digital initiatives. That’s a whole different topic, which we call service transformation. How do you look at an existing set of services that it is offering? How can you optimize it and create some dollars for new digital projects?

By the time modernization projects are done, the industry has moved on.

Most of these projects, if you are able to tie them back to specific customer benefits, may be impactful but not large to begin with. You put together plans [to, say,] provide the next best offer [to the digital shopper in a mall, and] start with that. … It is not a huge three-year project. I can start getting results in six, nine or 12 months at most, end-to-end. And we have seen that in an even shorter amount of time — as little as three to six months — for results to come out. We have also seen organizations going down the path of picking a use case, building it out, going to the next one. And as each one gets rolled out, you learn a lot about what’s working, what’s not working, and start building on it.

The methodology that we recommend is, ‘don’t go big bang, go chunks of value.’ You have to think big but implement small. You start thinking about what your future state is going to look like to some extent, so you’re building on top of a reference architecture, which is solid and future proof. But what you’re building one at a time is an end-to-end use case.

Knowledge@Wharton: When you think big but act small, can you share what some of the results are that you have seen in terms of impact?

Venugopal: In some cases, the results in as early as six months included a 30% savings from process improvement. And in one [earlier example] I told you about, the interaction time for a quote went down from two days to 60 minutes. These big impacts can happen in a short burst of time; typically within six to nine months we should be able to see definitive results of this sort.

Knowledge@Wharton: For financial institutions that want to start down the road to hyper-personalization of their services, where do you think they should start, and what first step should they take in their journey?

Venugopal: The first step, in my opinion, is to have a pure understanding of your customer and to clearly identify your business context. Try to understand the specific areas you want to focus on as your first set of opportunities in hyper-personalization. If you're a credit card line of business (LoB), it's about identifying three or four definitive use cases in which you think your customers would see value. Once you identify the set of use cases, you start thinking about which ones will have the highest impact. Then you chart them, with impact on one axis and effort to execute on the other, based on the current environment. After that, pick the ones that will have the highest impact and the least amount of effort to build, at least in round one.
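
That triage rule (highest impact, least effort first) is simple enough to state in a few lines; the use cases and scores below are invented illustrations.

```python
# Invented illustration of the impact-vs-effort triage for round one:
# score each candidate use case and pick high-impact, low-effort ones first.

use_cases = [
    # (name, impact 1-5, effort 1-5)
    ("next-best-offer at swipe",  5, 2),
    ("real-time fraud alerts",    4, 4),
    ("travel risk notifications", 3, 2),
    ("full core modernization",   5, 5),
]

# Sort by impact descending, then effort ascending.
round_one = sorted(use_cases, key=lambda uc: (-uc[1], uc[2]))
for name, impact, effort in round_one:
    print(f"impact {impact} / effort {effort}: {name}")
```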

Start working then with your organization to start building, taking this front-to-back approach. Figure out what services are available in the core system. Start building out your intelligent layer. Figure out what technology innovations are required, and then start building back your engagement layer and how you want to interact with the customer. This entire process, in my opinion, is not a very long one if you have the right people who understand the system. You could get to production in three to nine months; we have seen as little as three and, in some cases, as long as nine. Within three to nine months you should be able to see your first set of hyper-personalized services come out.
