{"id":77936,"date":"2019-08-22T14:46:05","date_gmt":"2019-08-22T14:46:05","guid":{"rendered":"https:\/\/\/cerebras-met-1200-milliards-de-transistors-sur-une-puce-ia\/"},"modified":"2019-08-22T14:46:05","modified_gmt":"2019-08-22T14:46:05","slug":"cerebras-met-1200-milliards-de-transistors-sur-une-puce-ia","status":"publish","type":"post","link":"https:\/\/www.ecinews.fr\/fr\/cerebras-met-1200-milliards-de-transistors-sur-une-puce-ia\/","title":{"rendered":"Cerebras met 1200 milliards de transistors sur une puce IA"},"content":{"rendered":"<p><span class=\"tlid-translation translation\" lang=\"fr\">Mesurant 46 225 mm2 et optimis\u00e9 pour l\u2019intelligence artificielle, le moteur Cerebras Wafer Scale Engine (WSE) est 56,7 fois plus grand que la plus grande unit\u00e9 de traitement graphique qui mesure 815 mm2, et il contient 1200 milliards de transistors. La puce WSE, ajoute la soci\u00e9t\u00e9, offre \u00e9galement 3 000 fois plus de m\u00e9moire haute vitesse sur puce et 10 000 fois plus de bande passante m\u00e9moire.<\/span><\/p>\n<p><span class=\"tlid-translation translation\" lang=\"fr\">\u00ab\u00a0Con\u00e7u d\u00e8s le d\u00e9part pour l&rsquo;IA, le Cerebras WSE contient des innovations fondamentales qui font \u00e9voluer les technologies de pointe en r\u00e9solvant des d\u00e9fis techniques vieux de plusieurs d\u00e9cennies qui limitaient la taille des puces &#8211; telles que la connectivit\u00e9 interne, le rendement en fabrication, la fourniture de puissance, et la mise en bo\u00eetier \u00ab\u00a0, explique Andrew Feldman, fondateur et PDG de Cerebras Systems. \u00ab\u00a0Toutes les d\u00e9cisions architecturales ont \u00e9t\u00e9 prises pour optimiser les performances du travail en intelligence artificielle. 
Le r\u00e9sultat est que le Cerebras WSE offre, en fonction de la charge de travail, des centaines, voire des milliers de fois les performances des solutions existantes pour une fraction infime de la consommation et de l&rsquo;encombrement.\u00a0\u00bb<\/span><\/p>\n<p><span class=\"tlid-translation translation\" lang=\"fr\">En mati\u00e8re d\u2019intelligence artificielle, la taille des puces rev\u00eat une importance capitale, car les puces plus larges sont en mesure de traiter les informations plus rapidement et donc de fournir des r\u00e9ponses plus rapidement. La r\u00e9duction du temps n\u00e9cessaire \u00e0 la compr\u00e9hension, ou \u00ab\u00a0temps d&rsquo;aprentissage\u00a0\u00bb &#8211; un obstacle majeur aux progr\u00e8s de l\u2019ensemble du secteur &#8211; permet aux chercheurs de tester plus d\u2019id\u00e9es, d\u2019utiliser plus de donn\u00e9es et de r\u00e9soudre de nouveaux probl\u00e8mes.<\/span><\/p>\n<p><span class=\"tlid-translation translation\" lang=\"fr\">Les gains de performances de la puce sont obtenus en acc\u00e9l\u00e9rant tous les \u00e9l\u00e9ments de l&rsquo;aprentissage des r\u00e9seaux de neurones. Un r\u00e9seau de neurones est une boucle de r\u00e9troaction informatique \u00e0 plusieurs niveaux. Plus les entr\u00e9es sont rapides dans la boucle, plus la boucle apprend ou \u00ab\u00a0s&rsquo;entra\u00eene\u00a0\u00bb rapidement. La solution pour d\u00e9placer les entr\u00e9es dans la boucle plus rapidement consiste \u00e0 acc\u00e9l\u00e9rer le calcul et la communication au sein de la boucle.<\/span><\/p>\n<p><span class=\"tlid-translation translation\" lang=\"fr\">Avec 56,7 fois plus de surface de silicium que la plus grande unit\u00e9 de traitement graphique, le WSE fournit plus de c\u0153urs pour effectuer les calculs et plus de m\u00e9moire au plus pr\u00e8s des c\u0153urs pour que les c\u0153urs puissent fonctionner efficacement. 
Comme cette vaste grille de c\u0153urs et de m\u00e9moire sont int\u00e9gr\u00e9s sur une seule puce, toutes les communications sont maintenues sur la puce, offrant ainsi une bande passante exceptionnelle qui permet aux groupes de c\u0153urs de collaborer avec une efficacit\u00e9 maximale. La bande passante m\u00e9moire n&rsquo;est plus un goulot d&rsquo;\u00e9tranglement.<\/span><\/p>\n<p><span class=\"tlid-translation translation\" lang=\"fr\">La Cerebras WSE h\u00e9berge 400 000 c\u0153urs de calcul optimis\u00e9s pour l&rsquo;IA, sans cache, sans surcharge, et 18 gigaoctets de m\u00e9moire SRAM locale, distribu\u00e9e et ultra-rapide, constituant le seul et unique niveau de la hi\u00e9rarchie de la m\u00e9moire. La bande passante m\u00e9moire est de 9 p\u00e9taoctets par seconde. Les c\u0153urs sont reli\u00e9s entre eux par un r\u00e9seau de communication maill\u00e9, \u00e0 la pointe de la technologie, enti\u00e8rement \u00ab\u00a0cabl\u00e9\u00a0\u00bb, qui fournit une bande passante globale de 100 p\u00e9tabits par seconde. 
Plus de c\u0153urs, plus de m\u00e9moire locale et une structure \u00e0 large bande passante et \u00e0 faible temps de latence cr\u00e9ent l\u2019architecture optimale pour acc\u00e9l\u00e9rer le travail de l\u2019IA, explique la soci\u00e9t\u00e9.<\/p>\n<p>La WSE est fabriqu\u00e9e par TSMC sur sa technologie de fabrication avanc\u00e9e \u00e0 16 nm.<\/span><\/p>\n<p>A lire \u00e9galement:<\/p>\n<p><strong><a href=\"https:\/\/www.electronique-eci.com\/news\/nvidia-lance-un-module-ia-99-offrant-472-gflops\">Nvidia lance un module IA \u00e0 $99 offrant 472 GFLOPS<\/a><\/strong><\/p>\n<p><strong><a href=\"https:\/\/www.electronique-eci.com\/news\/nvidia-rachete-mellanox-pour-69-milliards\">Nvidia rach\u00e8te Mellanox pour $6.9 milliards<\/a><\/strong><\/p>\n<p><strong><a href=\"https:\/\/www.electronique-eci.com\/news\/super-ordinateur-europeen-base-sur-des-modules-atos\">Super-ordinateur europ\u00e9en bas\u00e9 sur des modules ATOS<\/a><\/strong><\/p>\n<p><strong><a href=\"https:\/\/www.electronique-eci.com\/news\/premiere-ip-de-verification-pour-pci-express-50\">Premi\u00e8re IP de v\u00e9rification pour PCI Express 5.0<\/a><\/strong><\/p>\n<p><a href=\"https:\/\/www.cerebras.net\/\">Cerebras Systems<\/a><\/p>\n<p>&nbsp;<\/p>\n<p><b>Related articles:<\/b><br \/>\n<a href=\"https:\/\/www.smart2zero.com\/news\/tesla-nvidia-spar-over-best-autonomous-ai-chip\">Tesla, Nvidia spar over &lsquo;best&rsquo; autonomous AI chip<\/a><br \/>\n<a href=\"https:\/\/www.smart2zero.com\/news\/intel-invests-groundbreaking-ai-chip-architecture-startup\">Intel invests in &lsquo;groundbreaking&rsquo; AI chip architecture startup<\/a><br \/>\n<a href=\"https:\/\/www.smart2zero.com\/news\/ai-chip-market-set-rival-microcontrollers-2021\">AI chip market set to rival microcontrollers by 2021<\/a><br \/>\n<a href=\"https:\/\/www.smart2zero.com\/news\/steep-growth-ai-chip-market-will-produce-new-winners\">Steep growth of AI chip market will produce new 
winners<\/a><\/p>\n","protected":false},"excerpt":{"rendered":"<p>Startup Cerebras Systems, a specialist in AI compute acceleration (Los Altos, California), has unveiled what it says is the largest chip ever built, comprising more than 1.2 trillion transistors.<\/p>\n","protected":false},"author":22,"featured_media":77937,"comment_status":"closed","ping_status":"closed","sticky":false,"template":"","format":"standard","meta":{"_acf_changed":false,"footnotes":""},"categories":[883],"tags":[920,912,896,905,906,908,890,917],"domains":[47],"ppma_author":[1149],"class_list":["post-77936","post","type-post","status-publish","format-standard","has-post-thumbnail","hentry","category-technologies","tag-artificialintelligence-fr","tag-dataacquisition-fr","tag-digitalsignalprocessing-fr","tag-memory-data-storage-fr","tag-mpus-mcus-fr","tag-plds-fpgas-asics-fr","tag-powermanagement-fr","tag-software-embedded-tools-fr","domains-electronique-eci"],"acf":[],"yoast_head":"<title>Cerebras met 1200 milliards de transistors sur une puce IA ...<\/title>\n<meta name=\"description\" content=\"La startup Cerebras Systems sp\u00e9cialiste de l&#039;acc\u00e9l\u00e9ration du calcul AI  (Los Altos, Californie) a d\u00e9voil\u00e9 ce qu&#039;il dit \u00eatre la plus grande puce jamais...\" \/>\n<meta name=\"robots\" content=\"index, follow, max-snippet:-1, max-image-preview:large, max-video-preview:-1\" \/>\n<link rel=\"canonical\" href=\"https:\/\/www.ecinews.fr\/fr\/wp-json\/wp\/v2\/posts\/77936\/\" \/>\n<meta property=\"og:locale\" content=\"fr_FR\" \/>\n<meta property=\"og:type\" content=\"article\" \/>\n<meta property=\"og:title\" content=\"Cerebras met 1200 milliards de transistors sur une puce IA\" \/>\n<meta property=\"og:description\" content=\"La startup Cerebras Systems sp\u00e9cialiste de l&#039;acc\u00e9l\u00e9ration du calcul AI (Los Altos, Californie) a d\u00e9voil\u00e9 ce qu&#039;il dit \u00eatre la plus grande puce 
jamais construite, comprenant plus de 1 200 milliards de transistors.\" \/>\n<meta property=\"og:url\" content=\"https:\/\/www.ecinews.fr\/fr\/wp-json\/wp\/v2\/posts\/77936\/\" \/>\n<meta property=\"og:site_name\" content=\"EENewsEurope\" \/>\n<meta property=\"article:published_time\" content=\"2019-08-22T14:46:05+00:00\" \/>\n<meta property=\"og:image\" content=\"https:\/\/www.ecinews.fr\/wp-content\/uploads\/import\/default\/files\/sites\/default\/files\/images\/2019-08-20_ai-chip_wse_trillion_transistor_cerebras_systems.jpg\" \/>\n\t<meta property=\"og:image:width\" content=\"600\" \/>\n\t<meta property=\"og:image:height\" content=\"350\" \/>\n\t<meta property=\"og:image:type\" content=\"image\/jpeg\" \/>\n<meta name=\"author\" content=\"eeNews Europe\" \/>\n<meta name=\"twitter:card\" content=\"summary_large_image\" \/>\n<meta name=\"twitter:label1\" content=\"Written by\" \/>\n\t<meta name=\"twitter:data1\" content=\"eeNews Europe\" \/>\n\t<meta name=\"twitter:label2\" content=\"Est. 
reading time\" \/>\n\t<meta name=\"twitter:data2\" content=\"3 minutes\" \/>\n<script type=\"application\/ld+json\" class=\"yoast-schema-graph\">{\"@context\":\"https:\/\/schema.org\",\"@graph\":[{\"@type\":\"Article\",\"@id\":\"https:\/\/www.ecinews.fr\/fr\/cerebras-met-1200-milliards-de-transistors-sur-une-puce-ia\/#article\",\"isPartOf\":{\"@id\":\"https:\/\/www.ecinews.fr\/fr\/cerebras-met-1200-milliards-de-transistors-sur-une-puce-ia\/\"},\"author\":{\"name\":\"eeNews Europe\",\"@id\":\"https:\/\/www.eenewseurope.com\/en\/#\/schema\/person\/9eff4051fa9dac8230052de45e32b0f4\"},\"headline\":\"Cerebras met 1200 milliards de transistors sur une puce IA\",\"datePublished\":\"2019-08-22T14:46:05+00:00\",\"dateModified\":\"2019-08-22T14:46:05+00:00\",\"mainEntityOfPage\":{\"@id\":\"https:\/\/www.ecinews.fr\/fr\/cerebras-met-1200-milliards-de-transistors-sur-une-puce-ia\/\"},\"wordCount\":687,\"publisher\":{\"@id\":\"https:\/\/www.eenewseurope.com\/en\/#organization\"},\"keywords\":[\"ArtificialIntelligence\",\"DataAcquisition\",\"DigitalSignalProcessing\",\"Memory &amp; Data Storage\",\"MPUs\/MCUs\",\"PLDs\/FPGAs\/ASICs\",\"PowerManagement\",\"Software &amp; Embedded tools\"],\"articleSection\":[\"Technologies\"],\"inLanguage\":\"fr-FR\"},{\"@type\":\"WebPage\",\"@id\":\"https:\/\/www.ecinews.fr\/fr\/cerebras-met-1200-milliards-de-transistors-sur-une-puce-ia\/\",\"url\":\"https:\/\/www.ecinews.fr\/fr\/cerebras-met-1200-milliards-de-transistors-sur-une-puce-ia\/\",\"name\":\"Cerebras met 1200 milliards de transistors sur une puce IA 
-\",\"isPartOf\":{\"@id\":\"https:\/\/www.eenewseurope.com\/en\/#website\"},\"datePublished\":\"2019-08-22T14:46:05+00:00\",\"dateModified\":\"2019-08-22T14:46:05+00:00\",\"breadcrumb\":{\"@id\":\"https:\/\/www.ecinews.fr\/fr\/cerebras-met-1200-milliards-de-transistors-sur-une-puce-ia\/#breadcrumb\"},\"inLanguage\":\"fr-FR\",\"potentialAction\":[{\"@type\":\"ReadAction\",\"target\":[\"https:\/\/www.ecinews.fr\/fr\/cerebras-met-1200-milliards-de-transistors-sur-une-puce-ia\/\"]}]},{\"@type\":\"BreadcrumbList\",\"@id\":\"https:\/\/www.ecinews.fr\/fr\/cerebras-met-1200-milliards-de-transistors-sur-une-puce-ia\/#breadcrumb\",\"itemListElement\":[{\"@type\":\"ListItem\",\"position\":1,\"name\":\"Home\",\"item\":\"https:\/\/www.ecinews.fr\/fr\/\"},{\"@type\":\"ListItem\",\"position\":2,\"name\":\"Cerebras met 1200 milliards de transistors sur une puce IA\"}]},{\"@type\":\"WebSite\",\"@id\":\"https:\/\/www.eenewseurope.com\/en\/#website\",\"url\":\"https:\/\/www.eenewseurope.com\/en\/\",\"name\":\"EENewsEurope\",\"description\":\"Just another WordPress site\",\"publisher\":{\"@id\":\"https:\/\/www.eenewseurope.com\/en\/#organization\"},\"potentialAction\":[{\"@type\":\"SearchAction\",\"target\":{\"@type\":\"EntryPoint\",\"urlTemplate\":\"https:\/\/www.eenewseurope.com\/en\/?s={search_term_string}\"},\"query-input\":\"required 
name=search_term_string\"}],\"inLanguage\":\"fr-FR\"},{\"@type\":\"Organization\",\"@id\":\"https:\/\/www.eenewseurope.com\/en\/#organization\",\"name\":\"EENewsEurope\",\"url\":\"https:\/\/www.eenewseurope.com\/en\/\",\"logo\":{\"@type\":\"ImageObject\",\"inLanguage\":\"fr-FR\",\"@id\":\"https:\/\/www.eenewseurope.com\/en\/#\/schema\/logo\/image\/\",\"url\":\"https:\/\/www.ecinews.fr\/wp-content\/uploads\/2022\/02\/logo-1.jpg\",\"contentUrl\":\"https:\/\/www.ecinews.fr\/wp-content\/uploads\/2022\/02\/logo-1.jpg\",\"width\":283,\"height\":113,\"caption\":\"EENewsEurope\"},\"image\":{\"@id\":\"https:\/\/www.eenewseurope.com\/en\/#\/schema\/logo\/image\/\"}},{\"@type\":\"Person\",\"@id\":\"https:\/\/www.eenewseurope.com\/en\/#\/schema\/person\/9eff4051fa9dac8230052de45e32b0f4\",\"name\":\"eeNews Europe\",\"image\":{\"@type\":\"ImageObject\",\"inLanguage\":\"fr-FR\",\"@id\":\"https:\/\/www.eenewseurope.com\/en\/#\/schema\/person\/image\/fae8f0cb15861c4ae0ed4872e2c9fc22\",\"url\":\"https:\/\/secure.gravatar.com\/avatar\/5081509054e28b04ecd976976e723ce0?s=96&d=mm&r=g\",\"contentUrl\":\"https:\/\/secure.gravatar.com\/avatar\/5081509054e28b04ecd976976e723ce0?s=96&d=mm&r=g\",\"caption\":\"eeNews Europe\"}}]}<\/script>","yoast_head_json":{"title":"Cerebras met 1200 milliards de transistors sur une puce IA ...","description":"La startup Cerebras Systems sp\u00e9cialiste de l'acc\u00e9l\u00e9ration du calcul AI  (Los Altos, Californie) a d\u00e9voil\u00e9 ce qu'il dit \u00eatre la plus grande puce jamais...","robots":{"index":"index","follow":"follow","max-snippet":"max-snippet:-1","max-image-preview":"max-image-preview:large","max-video-preview":"max-video-preview:-1"},"canonical":"https:\/\/www.ecinews.fr\/fr\/wp-json\/wp\/v2\/posts\/77936\/","og_locale":"fr_FR","og_type":"article","og_title":"Cerebras met 1200 milliards de transistors sur une puce IA","og_description":"La startup Cerebras Systems sp\u00e9cialiste de l'acc\u00e9l\u00e9ration du calcul AI (Los Altos, 
Californie) a d\u00e9voil\u00e9 ce qu'il dit \u00eatre la plus grande puce jamais construite, comprenant plus de 1 200 milliards de transistors.","og_url":"https:\/\/www.ecinews.fr\/fr\/wp-json\/wp\/v2\/posts\/77936\/","og_site_name":"EENewsEurope","article_published_time":"2019-08-22T14:46:05+00:00","og_image":[{"width":600,"height":350,"url":"https:\/\/www.ecinews.fr\/wp-content\/uploads\/import\/default\/files\/sites\/default\/files\/images\/2019-08-20_ai-chip_wse_trillion_transistor_cerebras_systems.jpg","type":"image\/jpeg"}],"author":"eeNews Europe","twitter_card":"summary_large_image","twitter_misc":{"Written by":"eeNews Europe","Est. reading time":"3 minutes"},"schema":{"@context":"https:\/\/schema.org","@graph":[{"@type":"Article","@id":"https:\/\/www.ecinews.fr\/fr\/cerebras-met-1200-milliards-de-transistors-sur-une-puce-ia\/#article","isPartOf":{"@id":"https:\/\/www.ecinews.fr\/fr\/cerebras-met-1200-milliards-de-transistors-sur-une-puce-ia\/"},"author":{"name":"eeNews Europe","@id":"https:\/\/www.eenewseurope.com\/en\/#\/schema\/person\/9eff4051fa9dac8230052de45e32b0f4"},"headline":"Cerebras met 1200 milliards de transistors sur une puce IA","datePublished":"2019-08-22T14:46:05+00:00","dateModified":"2019-08-22T14:46:05+00:00","mainEntityOfPage":{"@id":"https:\/\/www.ecinews.fr\/fr\/cerebras-met-1200-milliards-de-transistors-sur-une-puce-ia\/"},"wordCount":687,"publisher":{"@id":"https:\/\/www.eenewseurope.com\/en\/#organization"},"keywords":["ArtificialIntelligence","DataAcquisition","DigitalSignalProcessing","Memory &amp; Data Storage","MPUs\/MCUs","PLDs\/FPGAs\/ASICs","PowerManagement","Software &amp; Embedded tools"],"articleSection":["Technologies"],"inLanguage":"fr-FR"},{"@type":"WebPage","@id":"https:\/\/www.ecinews.fr\/fr\/cerebras-met-1200-milliards-de-transistors-sur-une-puce-ia\/","url":"https:\/\/www.ecinews.fr\/fr\/cerebras-met-1200-milliards-de-transistors-sur-une-puce-ia\/","name":"Cerebras met 1200 milliards de transistors sur une puce IA 
-","isPartOf":{"@id":"https:\/\/www.eenewseurope.com\/en\/#website"},"datePublished":"2019-08-22T14:46:05+00:00","dateModified":"2019-08-22T14:46:05+00:00","breadcrumb":{"@id":"https:\/\/www.ecinews.fr\/fr\/cerebras-met-1200-milliards-de-transistors-sur-une-puce-ia\/#breadcrumb"},"inLanguage":"fr-FR","potentialAction":[{"@type":"ReadAction","target":["https:\/\/www.ecinews.fr\/fr\/cerebras-met-1200-milliards-de-transistors-sur-une-puce-ia\/"]}]},{"@type":"BreadcrumbList","@id":"https:\/\/www.ecinews.fr\/fr\/cerebras-met-1200-milliards-de-transistors-sur-une-puce-ia\/#breadcrumb","itemListElement":[{"@type":"ListItem","position":1,"name":"Home","item":"https:\/\/www.ecinews.fr\/fr\/"},{"@type":"ListItem","position":2,"name":"Cerebras met 1200 milliards de transistors sur une puce IA"}]},{"@type":"WebSite","@id":"https:\/\/www.eenewseurope.com\/en\/#website","url":"https:\/\/www.eenewseurope.com\/en\/","name":"EENewsEurope","description":"Just another WordPress site","publisher":{"@id":"https:\/\/www.eenewseurope.com\/en\/#organization"},"potentialAction":[{"@type":"SearchAction","target":{"@type":"EntryPoint","urlTemplate":"https:\/\/www.eenewseurope.com\/en\/?s={search_term_string}"},"query-input":"required name=search_term_string"}],"inLanguage":"fr-FR"},{"@type":"Organization","@id":"https:\/\/www.eenewseurope.com\/en\/#organization","name":"EENewsEurope","url":"https:\/\/www.eenewseurope.com\/en\/","logo":{"@type":"ImageObject","inLanguage":"fr-FR","@id":"https:\/\/www.eenewseurope.com\/en\/#\/schema\/logo\/image\/","url":"https:\/\/www.ecinews.fr\/wp-content\/uploads\/2022\/02\/logo-1.jpg","contentUrl":"https:\/\/www.ecinews.fr\/wp-content\/uploads\/2022\/02\/logo-1.jpg","width":283,"height":113,"caption":"EENewsEurope"},"image":{"@id":"https:\/\/www.eenewseurope.com\/en\/#\/schema\/logo\/image\/"}},{"@type":"Person","@id":"https:\/\/www.eenewseurope.com\/en\/#\/schema\/person\/9eff4051fa9dac8230052de45e32b0f4","name":"eeNews 
Europe","image":{"@type":"ImageObject","inLanguage":"fr-FR","@id":"https:\/\/www.eenewseurope.com\/en\/#\/schema\/person\/image\/fae8f0cb15861c4ae0ed4872e2c9fc22","url":"https:\/\/secure.gravatar.com\/avatar\/5081509054e28b04ecd976976e723ce0?s=96&d=mm&r=g","contentUrl":"https:\/\/secure.gravatar.com\/avatar\/5081509054e28b04ecd976976e723ce0?s=96&d=mm&r=g","caption":"eeNews Europe"}}]}},"authors":[{"term_id":1149,"user_id":22,"is_guest":0,"slug":"eenews-europe","display_name":"eeNews Europe","avatar_url":"https:\/\/secure.gravatar.com\/avatar\/5081509054e28b04ecd976976e723ce0?s=96&d=mm&r=g","0":null,"1":"","2":"","3":"","4":"","5":"","6":"","7":"","8":""}],"_links":{"self":[{"href":"https:\/\/www.ecinews.fr\/fr\/wp-json\/wp\/v2\/posts\/77936"}],"collection":[{"href":"https:\/\/www.ecinews.fr\/fr\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/www.ecinews.fr\/fr\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/www.ecinews.fr\/fr\/wp-json\/wp\/v2\/users\/22"}],"replies":[{"embeddable":true,"href":"https:\/\/www.ecinews.fr\/fr\/wp-json\/wp\/v2\/comments?post=77936"}],"version-history":[{"count":0,"href":"https:\/\/www.ecinews.fr\/fr\/wp-json\/wp\/v2\/posts\/77936\/revisions"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/www.ecinews.fr\/fr\/wp-json\/wp\/v2\/media\/77937"}],"wp:attachment":[{"href":"https:\/\/www.ecinews.fr\/fr\/wp-json\/wp\/v2\/media?parent=77936"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/www.ecinews.fr\/fr\/wp-json\/wp\/v2\/categories?post=77936"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/www.ecinews.fr\/fr\/wp-json\/wp\/v2\/tags?post=77936"},{"taxonomy":"domains","embeddable":true,"href":"https:\/\/www.ecinews.fr\/fr\/wp-json\/wp\/v2\/domains?post=77936"},{"taxonomy":"author","embeddable":true,"href":"https:\/\/www.ecinews.fr\/fr\/wp-json\/wp\/v2\/ppma_author?post=77936"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}